Splunk Search

Why is my appendcols search returning an incorrect count?

mackd
New Member

I have two separate searches that I want to group into one. When I use appendcols I get wrong counts for the search encapsulated within appendcols. Can someone clue me into what I'm doing wrong?

In the search below, "Provisioned Org" returns an incorrect count compared to when I run that search on its own.

sourcetype=logs statusCode=400 "Org failure" earliest=-1mon@mon latest=@mon
| timechart span=1d count as FAILED
| appendcols
    [ search sourcetype=logs "Provisioned org" earliest=-1mon@mon latest=@mon
    | timechart span=1d count as SUCCESSFUL ]
1 Solution

niketn
Legend

Appendcols cannot reliably correlate large numbers of events. Since you are aggregating roughly one month of data for both successful and failed events, there may be more events than your Splunk deployment can handle given its hardware and configured limits. You would notice two symptoms: the search running too slowly, and older dates returning 0 counts.

1) Run appendcols over a relatively short period, such as a week or a single day.
2) If the statusCode field (or any other field usable for correlation) is present for both successful and failed events, use a single stats/timechart command instead of correlation techniques like append, appendcols, or join. The search below assumes 200 is successful and everything else (including 400) is failed.

sourcetype=logs ("Org failure" OR "Provisioned org") statusCode=* earliest=-1mon@mon latest=@mon
| timechart span=1d count(eval(statusCode=200)) as SUCCESS, count(eval(statusCode!=200)) as FAILED
____________________________________________
| makeresults | eval message= "Happy Splunking!!!"
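To see why the appendcols version goes wrong, here is a toy model in plain Python (not Splunk itself, and the day/count values are made up for illustration). appendcols pastes subsearch rows onto the main results by row position, not by a shared key, so if the subsearch is truncated by search limits, the remaining rows line up with nothing and read as 0. A single pass over all events, split by status, avoids the problem:

```python
from collections import Counter

# Synthetic data: 7 days, each with 2 FAILED and 3 SUCCESSFUL events.
days = [f"2024-01-{d:02d}" for d in range(1, 8)]
events = [(day, "FAILED") for day in days for _ in range(2)] + \
         [(day, "SUCCESSFUL") for day in days for _ in range(3)]

# "Main search" and "subsearch" are computed independently.
failed = Counter(day for day, status in events if status == "FAILED")
successful = Counter(day for day, status in events if status == "SUCCESSFUL")

# Simulate the subsearch hitting a result limit: only 4 rows come back.
sub_counts = [successful[day] for day in days][:4]

# appendcols-style: paste the truncated column on by row position;
# rows past the truncation point silently read as 0.
appendcols_rows = [
    (day, failed[day], sub_counts[i] if i < len(sub_counts) else 0)
    for i, day in enumerate(days)
]

# Single-search style: one pass over all events, split by status,
# so every day keeps its true SUCCESSFUL count.
combined = {day: (failed[day], successful[day]) for day in days}
```

In the appendcols-style result, the last three days show 0 successes even though the underlying data has 3 per day, which matches the "older dates returning 0 counts" symptom; the single-pass version stays correct for every day.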


mackd
New Member

Thank you. Yes, I did notice both conditions you mentioned - slow queries and 0 counts.
