How many columns should a SQL table have?

Ad 1. As others have answered: 1024. But only if the total row size is 8060 bytes or fewer, so you could not actually create 1024 bigint columns or 1024 datetime columns.
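To see why, a quick back-of-the-envelope check with a hypothetical table: 1024 fixed-length bigint columns need 1024 × 8 = 8192 bytes before any row overhead, which already exceeds the 8060-byte limit.

```sql
-- Sketch (hypothetical names): 1024 * 8 = 8192 bytes of fixed-length data
-- is more than the 8060-byte row limit, so SQL Server rejects this
-- CREATE TABLE with an error that the minimum row size exceeds the maximum.
CREATE TABLE dbo.TooWide (
    c0001 bigint,
    c0002 bigint,
    -- ... columns c0003 through c1023 elided ...
    c1024 bigint
);
```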

Ad 2. No. However, if you maximize the number of columns, then most likely each row will occupy a full disk page. If you then SELECT just a few columns from many rows, many pages still have to be read, so selecting only a few of those columns (instead of all of them) will be relatively slow.
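One common mitigation, sketched here with hypothetical names, is a narrow covering index so that a few-column SELECT reads compact index pages instead of the wide base rows:

```sql
-- The index carries only the two columns the query needs, so far fewer
-- pages are read than when scanning the wide table itself.
CREATE NONCLUSTERED INDEX IX_FactTable_CreateDate
    ON dbo.FactTable (CreateDate)
    INCLUDE (Status);

SELECT CreateDate, Status
FROM dbo.FactTable
WHERE CreateDate >= DATEADD(year, -2, GETDATE());
```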

Ad 3. No. You should normalize your tables, which makes it exceptional to need more than, say, 20 columns per table.
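As a minimal normalization sketch (hypothetical schema), repeated attributes move into their own table and are referenced by key instead of being copied onto every row:

```sql
CREATE TABLE dbo.Customer (
    CustomerId int IDENTITY PRIMARY KEY,
    Name       nvarchar(200) NOT NULL,
    Email      nvarchar(320) NULL
);

-- Orders reference the customer instead of repeating Name/Email per row.
CREATE TABLE dbo.SalesOrder (
    OrderId    int IDENTITY PRIMARY KEY,
    CustomerId int NOT NULL REFERENCES dbo.Customer (CustomerId),
    CreateDate datetime2 NOT NULL
);
```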

What are the limitations for the number of columns and row size when using a Microsoft SQL Server database with P8 Content Platform Engine (CPE)?

Cause

The number of columns and the maximum row size limitations can create issues with some tables in P8 (e.g., DocVersion and Generic) as properties or custom objects are continually added to the object store.

Answer

A table is limited to a maximum of 1024 columns. SQL Server does have a wide-table feature that raises this limit to 30,000 columns, and SQL Server 2008 and later offer a Sparse Columns feature that can optimize storage when many columns hold NULL values; rows are still limited to 8060 bytes of data. However, neither Sparse Columns nor the wide-table feature is supported with P8 CPE at this time. One possible way to avoid the 1024-column maximum is to create multiple object stores for different classes of objects, serving different business needs.
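For reference, declaring sparse columns in SQL Server looks like the following (again, not usable with P8 CPE per the above). NULLs in SPARSE columns take no storage, at the cost of extra overhead for non-NULL values:

```sql
-- Hypothetical table: OptionalA/OptionalB are mostly NULL, so SPARSE
-- storage saves space for the NULL rows.
CREATE TABLE dbo.SparseDemo (
    Id        int PRIMARY KEY,
    OptionalA int         SPARSE NULL,
    OptionalB varchar(50) SPARSE NULL
);
```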

A table can contain a maximum of 8,060 bytes per row. However, Row-Overflow Data is supported in SQL Server 2008 and up. Starting in SQL Server 2008, this restriction is relaxed for tables that contain varchar, nvarchar, varbinary, sql_variant, or CLR user-defined type columns. The length of each one of these columns must still fall within the limit of 8,000 bytes; however, their combined widths can exceed the 8,060-byte limit. This is explained on the following Microsoft web page:
https://technet.microsoft.com/en-us/library/ms186981(v=sql.105).aspx
P8 also provides support for a CE "Long String" property data type, with the supporting column created as a large object (e.g. ntext) data type, which allows the strings to greatly exceed 8060 bytes.
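A minimal illustration of the relaxed row-size limit: the combined declared widths below exceed 8060 bytes, yet the table is valid because oversized variable-length values are pushed to row-overflow pages at run time.

```sql
-- Legal since SQL Server 2008 per the technote above: each column is within
-- the 8,000-byte per-column limit, and their combined widths may exceed
-- the 8,060-byte in-row limit.
CREATE TABLE dbo.OverflowDemo (
    Id int PRIMARY KEY,
    a  varchar(8000),
    b  varchar(8000)
);
```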


Thanks all for your input... unfortunately, I didn't build these tables. They were built as data warehouse tables because they didn't want users writing SQL; to make life easier, they made tables with lots of columns so end users could basically select * from factTable where createDate >= dateadd(year,-2,getdate()). To avoid having to do "joins," they also ETL everything into single tables for users, so there is a lot of repetition. This table likely has 900 columns, many of which are duplicated in other tables... but because they say "joins are bad," everything gets ETLed into one giant flat table. Sometimes it's values; sometimes it's just a 0 or 1 flag.

My first thought was to create a sister table with a 1:1 relationship with the original fact table so that I could offload the user assignment and date and time columns to a separate table, but that's still pushing 314 columns into a table. My next thought was to pivot those columns and have a table dedicated to statuses, so for every "thing" I would have a 1:many relationship. Each "thing" could have multiple users assigned (bonus: it would allow multiple people with the same role to be assigned to a thing, which can't be done now; if multiple people are assigned to a thing, the most recent assignment overwrites the existing one). As it stands, eventually all 34 user assignment columns will be populated, and each of those 280 date and time columns will have values in them as well.
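The pivoted 1:many design described above might be sketched like this (all table and column names hypothetical): one narrow row per assignment instead of 34 user columns and hundreds of datetime columns, which also preserves history when several users hold the same role.

```sql
CREATE TABLE dbo.ThingAssignment (
    ThingId    int         NOT NULL,
    StepCode   varchar(30) NOT NULL,  -- which former column this row replaces
    AssignedTo int         NOT NULL,  -- assigned user id
    AssignedAt datetime2   NOT NULL DEFAULT SYSUTCDATETIME(),
    CONSTRAINT PK_ThingAssignment
        PRIMARY KEY (ThingId, StepCode, AssignedTo, AssignedAt)
);
```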

I was just trying to wrap my head around this because yes, you can have 1000+ columns per table... but my brain says it's a bad idea because of locking. As activity on the table grows, each update statement is going to lock the entire row. If we are rapidly reassigning and tagging dates/times, that's going to be a problem with growth. Going vertical would increase the number of rows, but would lessen the row locking as things are inserted/updated.
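A hedged illustration of the locking concern (hypothetical names throughout): updating any column of the wide row takes an exclusive lock on the whole row, while a vertical design turns a reassignment into an insert of a new narrow row.

```sql
-- Wide design: every reassignment X-locks the entire 900-column row.
UPDATE dbo.FactTable
SET    AssignedUser07   = @UserId,
       Step07AssignedAt = SYSUTCDATETIME()
WHERE  ThingId = @ThingId;

-- Vertical design: an INSERT of one narrow row; it does not contend with
-- concurrent changes to other steps of the same thing.
INSERT INTO dbo.ThingAssignment (ThingId, StepCode, AssignedTo)
VALUES (@ThingId, 'STEP07', @UserId);
```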

For explanation purposes ... a "thing" is assigned to a user.  A user can do many different steps which is why there are 34 user assignment columns and 240 date and time columns.  One user could do 10-12 steps to a particular "thing".

How many columns should an SQL table have?

For the full list of limits, see the "Maximum capacity specifications for SQL Server" documentation under Database Engine objects.

How many columns is too many in a SQL table?

In MySQL, there is a hard limit of 4096 columns per table, but the effective maximum may be less for a given table; the exact limit depends on several interacting factors. Every MySQL table (regardless of storage engine) has a maximum row size of 65,535 bytes.
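As a hedged example of hitting the MySQL row-size limit rather than the column limit (latin1 chosen so one character is one byte): a handful of wide varchar columns can exceed 65,535 bytes on their own, and MySQL rejects the table with a "Row size too large" error.

```sql
-- MySQL, not SQL Server: declared widths total roughly 70,000 bytes,
-- which is over the 65,535-byte row limit, so this CREATE TABLE fails
-- with ER_TOO_BIG_ROWSIZE ("Row size too large").
CREATE TABLE row_too_big (
    c1 varchar(20000),
    c2 varchar(20000),
    c3 varchar(30000)
) CHARSET = latin1;
```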

Can you have too many columns in SQL?

A common problem in SQL is having too many columns in the projection. This may be due to reckless use of SELECT *, or to refactoring that removed the need for some of the projected columns without the query being adapted.
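A small before/after sketch with a hypothetical table: listing only the columns the caller actually uses keeps the projection narrow and lets the optimizer satisfy the query from an index.

```sql
-- Before: drags every column through the plan, even unused ones.
SELECT *
FROM dbo.Orders
WHERE CreateDate >= DATEADD(year, -2, GETDATE());

-- After: explicit, minimal projection.
SELECT OrderId, CreateDate, Status
FROM dbo.Orders
WHERE CreateDate >= DATEADD(year, -2, GETDATE());
```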

Does number of columns affect performance in SQL?

Irrespective of the number of columns in Table A or Table B, with the above indexes in place, performance should be identical for both queries (assuming the same number of rows and similar data in both tables), since SQL Server will hit the indexes, which are now of similar column widths and row densities, without needing ...
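The point can be sketched as follows (hypothetical tables, one narrow and one wide): once each has an equivalent covering index, the query touches only the index, and the base table's column count stops mattering.

```sql
CREATE NONCLUSTERED INDEX IX_TableA_CustomerId
    ON dbo.TableA (CustomerId) INCLUDE (Amount);
CREATE NONCLUSTERED INDEX IX_TableB_CustomerId
    ON dbo.TableB (CustomerId) INCLUDE (Amount);

-- Both queries are answered from an index of identical width; the number
-- of other columns in TableA vs. TableB is irrelevant to the plan.
SELECT CustomerId, SUM(Amount) FROM dbo.TableA GROUP BY CustomerId;
SELECT CustomerId, SUM(Amount) FROM dbo.TableB GROUP BY CustomerId;
```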