Moving a database from on-premises to Azure SQL can offer cost savings through increased uptime and reduced maintenance costs. However, on the lower tiers of the DTU pricing model you may see a significant impact on query performance, depending on your database. This is because Azure limits the rate of reading and writing to disk (IOPS). Raising the DTU allocation can become very expensive, so it is often better to store the data in a way that minimizes the disk operations needed to complete the query. Another contributor to high disk IO is the small amount of memory allocated to Azure SQL. A typical physical SQL Server might have around 16GB of memory available as a cache to reduce disk IO. It's not clear how much memory is allocated to the DTU-based machines, though some blogs suggest around 512MB for the S0-S2 tiers. With so little memory, queries are less likely to be served from the data caches, so more disk access is needed.
A covering index is similar to a normal index except that it stores a copy of the selected columns alongside the key, so no key lookup is needed. Adding covering indexes can greatly reduce the IOPS needed to complete a query, at the expense of disk storage. Essentially, each covering index you add can increase the database size by roughly the size of the table, because a covering index is like a copy of the table with the data stored in a different order. This trade-off makes more sense on Azure than on physical boxes, because on Azure storage space is much cheaper than DTUs. When SQL Server reads from a covering index the data is already in order, so the read is sequential. With a normal index that uses key lookups, each lookup reads a different page/disk block, which may contain mostly unwanted rows. That equates to wasted IOPS.
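As a sketch, suppose a query filters on `CustomerId` and selects `OrderDate` and `Total` (the `Orders` table and column names here are hypothetical). A covering index stores the selected columns in its leaf pages via the `INCLUDE` clause, so the query can be answered entirely from the index:

```sql
-- Non-clustered index that "covers" the query below: the key column
-- plus the selected columns are stored at the leaf level, so no key
-- lookup back to the base table is needed.
CREATE NONCLUSTERED INDEX IX_Orders_CustomerId_Covering
ON dbo.Orders (CustomerId)
INCLUDE (OrderDate, Total);

-- This query can now be satisfied from the index alone,
-- as a sequential read of the matching index pages:
SELECT OrderDate, Total
FROM dbo.Orders
WHERE CustomerId = 42;
```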
A few notes: a covering index only needs to include the columns that are used in your select queries, though if you use "select *" every column will need to be included. When you add a new column to a table, it will also need to be added to the covering index; otherwise SQL Server will revert to key lookups to fetch the additional column.
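Continuing the hypothetical `Orders` example, if a new `Status` column is added to the table and selected by queries, the index can be recreated with the extra column in one step using `DROP_EXISTING`:

```sql
-- Recreate the covering index with the new column in the INCLUDE list.
-- DROP_EXISTING = ON rebuilds the index in a single operation instead
-- of a separate drop followed by a create.
CREATE NONCLUSTERED INDEX IX_Orders_CustomerId_Covering
ON dbo.Orders (CustomerId)
INCLUDE (OrderDate, Total, Status)
WITH (DROP_EXISTING = ON);
```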
Over time, indexes will become fragmented, making disk access less efficient. With SQL Server running on a VM you can use SQL Server Agent to schedule jobs that rebuild/reorganize indexes. However, Azure SQL has no SQL Server Agent. Instead, we can use Azure Automation to schedule a runbook with some PowerShell that runs the maintenance. Ola Hallengren provides a SQL script, IndexOptimize, for this. There are other blogs on how to set up Azure Automation, so I won't cover that here. Jobs can be scheduled to run during off-peak hours, when DTU is not being consumed by other tasks.
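As a rough sketch of what such a maintenance job does (the 5%/30% thresholds below are the commonly cited guidelines, not hard rules), fragmentation can be inspected with the `sys.dm_db_index_physical_stats` DMV, reorganizing moderately fragmented indexes and rebuilding heavily fragmented ones:

```sql
-- List indexes in the current database with noticeable fragmentation.
SELECT OBJECT_NAME(ps.object_id) AS TableName,
       i.name AS IndexName,
       ps.avg_fragmentation_in_percent
FROM sys.dm_db_index_physical_stats(DB_ID(), NULL, NULL, NULL, 'LIMITED') AS ps
JOIN sys.indexes AS i
  ON i.object_id = ps.object_id AND i.index_id = ps.index_id
WHERE ps.avg_fragmentation_in_percent > 5;

-- Reorganize for moderate fragmentation (roughly 5-30%)...
ALTER INDEX IX_Orders_CustomerId_Covering ON dbo.Orders REORGANIZE;

-- ...or rebuild for heavy fragmentation (above roughly 30%).
-- ONLINE = ON keeps the table available during the rebuild.
ALTER INDEX IX_Orders_CustomerId_Covering ON dbo.Orders
REBUILD WITH (ONLINE = ON);
```

In practice Ola Hallengren's IndexOptimize script wraps this logic (thresholds, all tables, logging) so you only need to call it from the runbook.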