Amazon recently announced the broad availability of Amazon Redshift – a fast, powerful, fully managed, petabyte-scale data warehouse service. Redshift, based on technology from ParAccel, is an aggressively priced solution at $1,000 per terabyte per year. Per the AWS blog, it is a highly cost-effective offering that comes in at about one-tenth the cost of a typical data warehousing option and is expected to deliver ten times the performance.

Redshift is positioned to replace the capital expenditure of building a data warehouse that requires regular maintenance and administration with a cloud-based service that is instantly scalable and doesn’t require database administrators to maintain its speed and storage capacity.
The good:
Redshift is touted to bring business analytics capabilities to the cloud, simplifying the process of building traditional data warehouses at a fraction of the cost while allowing customers to keep using their analytics tool of choice, such as MicroStrategy, Jaspersoft, or Cognos.
The not so good:
While the use of existing analytics tools will cut expenses for customers moving to Redshift (no training required!), not many clients use columnar databases today. Clients that shift to Redshift need to account for a learning curve when it comes to Redshift's architecture: normalization and indexing do not really apply to columnar databases, so you really need to understand your data and the way it is going to be used.
Unlike some other columnar databases, Redshift doesn’t provide the ability to have multiple projections of a single table, each with different encodings and sort orders; you need to maintain separate tables instead.
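To make the "separate tables" point concrete, here is a minimal sketch of generating two copies of the same table, each with a different SORTKEY, so each can serve a different query pattern. The table and column names are hypothetical, and the DDL is built as plain strings so the sketch stays self-contained:

```python
# Sketch: Redshift has no multiple projections per table, so a second
# physical copy with a different SORTKEY is one workaround.
# Table/column names below are illustrative, not from the article.

def redshift_ddl(table, columns, sortkey, distkey):
    """Build a CREATE TABLE statement with Redshift sort and distribution keys."""
    cols = ",\n    ".join(f"{name} {ctype}" for name, ctype in columns)
    return (
        f"CREATE TABLE {table} (\n    {cols}\n)\n"
        f"DISTKEY ({distkey})\n"
        f"SORTKEY ({sortkey});"
    )

columns = [("event_time", "TIMESTAMP"),
           ("user_id", "BIGINT"),
           ("page", "VARCHAR(256)")]

# One copy sorted for time-range scans, a second for per-user lookups.
by_time = redshift_ddl("events_by_time", columns,
                       sortkey="event_time", distkey="user_id")
by_user = redshift_ddl("events_by_user", columns,
                       sortkey="user_id", distkey="user_id")
```

The cost of this approach is that your load process must write to both tables and keep them in sync, which is exactly the overhead projections would normally hide from you.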
Integration:
Redshift integrates nicely with a number of other AWS services. You can load data into a cluster from Amazon S3 or Amazon DynamoDB. You can also use AWS Data Pipeline to load data from Amazon RDS, Amazon Elastic MapReduce, and your own Amazon EC2 data sources.
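Loading from S3 is done with Redshift's COPY command, issued through any ordinary PostgreSQL client. A minimal sketch of building such a statement follows; the bucket, prefix, table name, and credential placeholders are all hypothetical:

```python
# Sketch: a Redshift COPY statement that bulk-loads gzipped,
# pipe-delimited files from S3. All names here are placeholders.

def copy_from_s3(table, bucket, prefix, access_key, secret_key):
    """Build a COPY statement for loading S3 files into a Redshift table."""
    return (
        f"COPY {table}\n"
        f"FROM 's3://{bucket}/{prefix}'\n"
        f"CREDENTIALS 'aws_access_key_id={access_key};"
        f"aws_secret_access_key={secret_key}'\n"
        f"DELIMITER '|' GZIP;"
    )

stmt = copy_from_s3("events_by_time", "my-bucket", "events/2013/",
                    "ACCESS_KEY_PLACEHOLDER", "SECRET_KEY_PLACEHOLDER")
```

COPY loads in parallel across the cluster's nodes, which is why it is the recommended path over row-by-row INSERTs for bulk data.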
Management:
Amazon Redshift manages all the work needed to set up, operate, and scale a data warehouse, from provisioning capacity to monitoring and backing up the cluster, to applying patches and upgrades. By handling these time-consuming, labor-intensive tasks, Amazon Redshift frees you up to focus on your data and business insights.
Security:
A number of mechanisms are available to secure your data warehouse cluster. Redshift currently supports SSL to encrypt data in transit, includes web service interfaces to configure firewall settings that control network access to your data warehouse, and enables you to create users within your data warehouse cluster. It also supports encrypting data at rest and running inside Amazon Virtual Private Cloud (Amazon VPC).
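For the in-transit piece, a client can refuse any non-SSL connection by requiring SSL in its connection string. A minimal sketch, assuming a hypothetical cluster endpoint and Redshift's default port of 5439:

```python
# Sketch: a libpq-style connection string (as used by psycopg2 and
# other PostgreSQL clients) that refuses non-SSL connections.
# The endpoint, database, and user below are hypothetical.

def redshift_dsn(host, dbname, user, password, port=5439):
    """Build a connection string that requires SSL for data in transit."""
    return (
        f"host={host} port={port} dbname={dbname} "
        f"user={user} password={password} sslmode=require"
    )

dsn = redshift_dsn("examplecluster.abc123.us-east-1.redshift.amazonaws.com",
                   "dev", "masteruser", "PASSWORD_PLACEHOLDER")
```

With `sslmode=require`, the client fails outright rather than silently falling back to an unencrypted connection.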
Competition:
EMC Greenplum, IBM Netezza, HP Vertica, BitYota, and Treasure Data. The latter two already run on AWS infrastructure.
Conclusion:
The price is unbeatable and getting started is easy, so a lot of small companies may be attracted to Redshift right away. However, some of my large clients may be slow adopters, using Redshift for trial, development, and test environments.
A few of my clients who today retain data for six months or less may find this inexpensive option a trigger for changing their retention policy, so that they can please the business by reporting on large data sets for as long as they desire.
Sub-second performance, which may be critical for some analytics applications, may be a challenge, though BI and general reporting use cases should work well.
Lastly, some clients may have concerns about the replacement time for failed nodes and the mandatory maintenance window that must be scheduled each week.
For more on Amazon Redshift, check out this introductory video: