Amazon Web Services has increased the number of simultaneous queries its hosted data warehouse Redshift can handle, improving performance in cases where many small queries were previously forced to wait.
Amazon contends that Redshift lowers the bar for implementing and managing a data warehouse: the company provisions the infrastructure, and tasks such as backups and patching are automated.
The latest upgrade lets users execute up to 50 queries at the same time, compared to a maximum of 15 before.
The ability to execute more queries is useful when a data warehouse has to handle lots of small queries. A queue that is configured to handle many queries has less memory for each one, but that is offset by the fact that smaller queries require less memory, according to Amazon.
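Concurrency in Redshift is set per queue through its workload management (WLM) settings, via the wlm_json_configuration parameter of a cluster parameter group. The fragment below is an illustrative sketch, not Amazon's published example; the queue name "reporting" and the concurrency split are assumptions, with the two queues together using the new 50-query ceiling:

```json
[
  {
    "query_group": ["reporting"],
    "query_concurrency": 35
  },
  {
    "query_concurrency": 15
  }
]
```

The first queue handles queries tagged with the "reporting" query group at up to 35 at a time; the second is a catch-all for everything else. Because each queue's memory is divided among its slots, a high-concurrency queue like this suits many small queries, in line with Amazon's reasoning above.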
Redshift data warehouses are made up of clusters of so-called Dense Storage nodes or Dense Compute nodes. The storage nodes allow enterprises to build very large data warehouses using hard disk drives for a low price per gigabyte, while the compute nodes let enterprises configure high-performance data warehouses using faster CPUs, large amounts of RAM and SSD storage.

The compute nodes are ideal for enterprises that have less than 500GB of data in their warehouse or whose primary focus is performance. The storage nodes are a better fit when performance isn't as critical and storage demands are high but the budget isn't, according to Amazon.
The storage nodes cost from US$0.850 per hour and the cheapest compute node is priced at $0.250 per hour.
The compute nodes were announced in January; in February Redshift was integrated with CloudFormation, which lets developers and systems administrators create and manage a collection of related resources.
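With the CloudFormation integration, a Redshift cluster can be declared as a resource in a template alongside the rest of a stack. The following is a minimal sketch using the AWS::Redshift::Cluster resource type; the database name, credentials and node type shown are placeholder assumptions, not values from the announcement:

```json
{
  "Resources": {
    "DataWarehouse": {
      "Type": "AWS::Redshift::Cluster",
      "Properties": {
        "ClusterType": "multi-node",
        "NumberOfNodes": 2,
        "NodeType": "dw2.large",
        "DBName": "analytics",
        "MasterUsername": "admin",
        "MasterUserPassword": "ChangeMe1234"
      }
    }
  }
}
```

Creating or deleting the stack then provisions or tears down the cluster together with any related resources defined in the same template.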
Among the competitors to Amazon Redshift is Microsoft, which lets enterprises run a customized version of SQL Server for data warehousing on Azure. The customization is based on the company's Fast Track reference hardware configurations for on-premises installations.