All Posts


Make sure you are setting a valid label for the container. Also, double-check that a valid severity and sensitivity are being set on the container. You can check for errors when Splunk tries to create the container in SOAR. Run this SPL:

index=cim_modactions error
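If that search is noisy, a slightly more targeted variant may help (just a sketch; the 24-hour window and the chosen columns are arbitrary choices, not required):

index=cim_modactions error earliest=-24h
| sort - _time
| table _time, sourcetype, _raw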
thanks!
Thanks
No difference with inputlookup. fields is usually preferred if working with an index search that fetches actual events.
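For context on this fields-versus-table point: when the subsearch is just an inputlookup, both commands return the same ID values and there is no practical difference; the distinction matters when the subsearch scans indexed events, because fields is a distributable streaming command while table forces results back to the search head. A minimal sketch of the two forms, reusing the lookup and field names from the original question:

index=<index> [| inputlookup lookup_table | search NAME = "Toronto" | fields ID]

``` same result here, but against an event-fetching subsearch, fields is the cheaper choice ```
index=<index> [| inputlookup lookup_table | search NAME = "Toronto" | table ID]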
Illustrating your data could have saved everybody a ton of time reading your mind. The solution is the same as I suggested earlier: kv (aka extract) is your friend. But first, let me correct the JSON error in your mock data:

{"cluster_id":"cluster","kubernetes":{"host":"host","labels":{"app":"app","version":"v1"},"namespace_name":"namespace","pod_name":"pod"},"log":{"App":"app_name","Env":"stg","LogType":"Application","contextMap":{},"endOfBatch":false,"level":"INFO","loggerFqcn":"org.apache.logging.log4j.spi.AbstractLogger","loggerName":"com.x.x.x.X","message":"Json path=/path feed=NAME sku=SKU_NAME status=failed errorCount=3 errors=ERROR_1, ERROR_2, MORE_ERROR_3 fields=Field 1, Field 2, More Fields Here"}}

Now this is compliant JSON. Second, are you saying that your developers are so inconsiderate as to not properly quote key-value pairs? Like I said earlier, in that case you need to deal with them first. The best route is to implore them to improve log hygiene. Failing that, you can deal with the data in a limited way using SPL. The following depends on errors appearing before fields. Note that the field message is actually named log.message in Splunk. (Many other languages flatten JSON this way, too.)

| rename log.message as _raw
| rex mode=sed "s/errors=(.+) fields=(.+)/errors=\"\1\" fields=\"\2\"/"
| kv
| table path feed sku status errorCount errors fields

Output is

path | feed | sku | status | errorCount | errors | fields
/path | NAME | SKU_NAME | failed | 3 | ERROR_1, ERROR_2, MORE_ERROR_3 | Field 1, Field 2, More Fields Here

Here is a full emulation of your mock data. Play with it and compare with real data.

| makeresults
| eval _raw ="{\"cluster_id\":\"cluster\",\"kubernetes\":{\"host\":\"host\",\"labels\":{\"app\":\"app\",\"version\":\"v1\"},\"namespace_name\":\"namespace\",\"pod_name\":\"pod\"},\"log\":{\"App\":\"app_name\",\"Env\":\"stg\",\"LogType\":\"Application\",\"contextMap\":{},\"endOfBatch\":false,\"level\":\"INFO\",\"loggerFqcn\":\"org.apache.logging.log4j.spi.AbstractLogger\",\"loggerName\":\"com.x.x.x.X\",\"message\":\"Json path=/path feed=NAME sku=SKU_NAME status=failed errorCount=3 errors=ERROR_1, ERROR_2, MORE_ERROR_3 fields=Field 1, Field 2, More Fields Here\"}}"
| spath
``` data emulation above ```
Try this one:

index=<index> [| inputlookup lookup_table | search NAME = "Toronto" | table ID]
    I think we should use table instead of fields.    
This is somewhat confusing. Do you mean to say that you have a multiselect token that evaluates into the search expression shown in the first code box, or is that one of the multiselect values? If the former, I strongly suggest that you rethink the strategy, because a user may well end up composing a token that evaluates into

|table source index="myindex" sourcetype="pinginfo" source="C:\\a\\b\\c\\d\\e\\f f\\g\\h\\ı-i-j\\porty*" |dedup source

This is probably not what the user wanted. Regardless, if you really, really want double backslashes (I really can't conjure up a good reason for that, even in a Microsoft world), you need something like

index="myindex" sourcetype="pinginfo" source="C:\\\\a\\\\b\\\\c\\\\d\\\\e\\\\f f\\\\g\\\\h\\\\ı-i-j\\\\porty*"
|table source
|dedup source
This dashboard (Traffic Search Dashboard) is accessible in the Network domain of the Splunk Enterprise Security app. You can also create a similar dashboard with these inputs and use tokens to modify your search.
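If you go the similar-dashboard route, the panels behind a traffic search typically query the CIM Network Traffic data model, so a token-driven search could look roughly like this (a sketch only; the $src_token$ and $dest_token$ names are hypothetical inputs, and summariesonly=true assumes the data model is accelerated):

| tstats summariesonly=true count from datamodel=Network_Traffic.All_Traffic
    where All_Traffic.src="$src_token$" All_Traffic.dest="$dest_token$"
    by All_Traffic.src All_Traffic.dest All_Traffic.dest_port All_Traffic.action
| rename "All_Traffic.*" as "*"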
I've seen someone use this traffic search function but can't find it myself. How can I access this traffic search function? I know that I can run a search to get the same result, but I would like to be able to use this handy function as well.
This is Splunk. The answer is always yes :-) In this case, it's much simpler than you think:

index=<index> [| inputlookup lookup_table where NAME = "Toronto" | fields ID]
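For reference, with the three sample rows shown in the question, the subsearch effectively expands the outer search into something like the following (the exact expansion can be checked in the Job Inspector):

index=<index> ((ID=765) OR (ID=1157) OR (ID=36))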
I think streamstats in the title throws volunteers off, because it is hard to see how it relates to your requirement, which you describe quite well without SPL. It would be better if you also illustrated the input and desired output. Here is one way to do what you ask:

index = foo sourcetype = bar earliest=-2h latest=now
| addinfo
| stats earliest(state) as two_hours_ago latest(state) as now by pv_number info_min_time info_max_time
| where two_hours_ago == now
| eval info_min_time = strftime(info_min_time, "%F %T"), info_max_time = strftime(info_max_time, "%F %T")

Emulated output without the where filter looks like

pv_number | info_min_time | info_max_time | two_hours_ago | now
ApplicationUpdateThread | 2024-10-03 22:44:19 | 2024-10-04 00:44:19 | 22 | 22
ExecProcessor | 2024-10-03 22:44:19 | 2024-10-04 00:44:19 | 44 | 44
HTTPDispatch | 2024-10-03 22:44:19 | 2024-10-04 00:44:19 | 28 | 29
SavedSearchFetcher | 2024-10-03 22:44:19 | 2024-10-04 00:44:19 | 27 | 27
TcpChannelThread | 2024-10-03 22:44:19 | 2024-10-04 00:44:19 | 21 | 33
TelemetryMetricBuffer | 2024-10-03 22:44:19 | 2024-10-04 00:44:19 | 31 | 33
indexerPipe | 2024-10-03 22:44:19 | 2024-10-04 00:44:19 | 0 | 0
tailreader0 | 2024-10-03 22:44:19 | 2024-10-04 00:44:19 | 44 | 44
webui | 2024-10-03 22:44:19 | 2024-10-04 00:44:19 | 28 | 29

With the filter, the output is

pv_number | info_min_time | info_max_time | two_hours_ago | now
ApplicationUpdateThread | 2024-10-03 22:42:19 | 2024-10-04 00:42:19 | 22 | 22
ExecProcessor | 2024-10-03 22:42:19 | 2024-10-04 00:42:19 | 42 | 42
SavedSearchFetcher | 2024-10-03 22:42:19 | 2024-10-04 00:42:19 | 27 | 27
indexerPipe | 2024-10-03 22:42:19 | 2024-10-04 00:42:19 | 0 | 0
tailreader0 | 2024-10-03 22:42:19 | 2024-10-04 00:42:19 | 42 | 42

Is this something you are looking for? The emulation I use to produce mock data is

index = _internal earliest=-2h latest=now
| rename thread_name as "pv_number", date_minute as state
``` data emulation above ```
I have a lookup table that we update on a daily basis, with two fields that are relevant here, NAME and ID.

NAME | ID
Toronto | 765
Toronto | 1157
Toronto | 36

I need to pull data from an index and filter for these three IDs. Normally I would just do

<base search> | lookup lookup_table ID OUTPUT NAME | where NAME = "Toronto"

This works, but the search takes forever, since the base search is pulling records from everywhere and filtering afterward. I'm wondering if it's possible to do something like this (pseudo-code search incoming):

index=<index> ID IN ( |[inputlookup lookup_table where NAME = "Toronto"])

Basically, I'm trying to save time by not pulling all the records at the beginning, and instead filter on a dynamic value that I have to grab from a lookup table.
I am testing out the Splunk Operator Helm chart to deploy a C3 architecture Splunk instance. At the moment everything deploys without errors: my cluster manager will pull and install apps via the AppFramework config, and SmartStore is receiving data from the indexer cluster. However, after creating ingress objects for each Splunk instance in the deployment (LM, CM, MC, SHC, IDXC), I have been able to successfully log into every WebGUI except for the indexer cluster. The behavior I am experiencing is basically like getting kicked out of the GUI the second I type the username and password and hit enter. The web page refreshes and I am back at the login screen. I double-checked that the Kubernetes secret containing the admin password is the same for all of the Splunk instances, and I also intentionally typed in a bad password and got a login-failed message instead of the screen refresh I get when entering the correct password. I am not really sure how to go about troubleshooting this. I searched through the _internal index but didn't come up with a smoking gun.
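Not a full answer, but for digging further: a login loop like this usually leaves traces in Splunk's own web access and audit logs on the affected instances. A rough sketch of where one might start (sourcetype names are the standard ones for Splunk's internal logs; the field names are the usual extractions, so verify them against your events):

index=_internal sourcetype=splunk_web_access uri="*account/login*"
| stats count by host, status

and, on the authentication side:

index=_audit action="login attempt"
| table _time, host, user, action, info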
Not sure I can do that using a base query and then a chained query.

Panel A gives me the MA line:

| timechart count span=1d
| streamstats time_window=30d avg(count) as A
| eval A=round(A,0)

Panel B gives me the count-by-day bar:

| timechart span=1d count(B) by B
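For what it's worth, if the split by B in Panel B is not essential, a single search can produce both series, which sidesteps the base/chained-query question entirely. A sketch reusing your own commands (the index/sourcetype placeholders and the field name daily_count are mine):

index=<your_index> sourcetype=<your_sourcetype>
| timechart span=1d count as daily_count
| streamstats time_window=30d avg(daily_count) as A
| eval A=round(A,0)

daily_count can then be drawn as columns, with A as a line overlay on the same chart.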
Had to step away from this for a while due to more pressing fires. Finally got to look at it today, borrowed a nifty query from dbinspect query help - Splunk Community, and found out that Splunk thinks my oldest bucket is from May 5, 2024, despite some being up to 8 years old. If I search against time, they seem to come up correctly going back years, so I'm at an utter loss on that one. But at least I know why it's not rolling anything off now!
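For anyone else checking bucket ages this way, the dbinspect query is roughly of this shape (a sketch; <your_index> is a placeholder, and startEpoch/endEpoch are the earliest and latest event times dbinspect reports per bucket):

| dbinspect index=<your_index>
| eval earliest_event = strftime(startEpoch, "%F %T"), latest_event = strftime(endEpoch, "%F %T")
| sort startEpoch
| table bucketId, state, earliest_event, latest_event, sizeOnDiskMB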
Thanks for the assistance @sainag_splunk. I didn't know about some of the btool options. I normally do

btool --debug [inputs|props|transforms] list <stanza>
The solution was filtering what was returned. The search went from 1139 users reporting down to 233, and the 233 didn't error.
@sainag_splunk I didn't get any results back from the searches. This isn't surprising, since the information is a CSV file ingested by Splunk for reference. We don't do any modifications of the data.
Is it possible to see the SPL queries that MC uses for those dashboards?
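One way to see them (a sketch, not the only route; splunk_monitoring_console is the app that ships the MC dashboards, and the eai:data field returned by the views endpoint holds each dashboard's XML, embedded searches included):

| rest /servicesNS/-/splunk_monitoring_console/data/ui/views
| table title, "eai:data"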