All Posts

Are you able to check which process is using the inputs.conf file with lsof? You may need to stop Splunk, update the file, then start Splunk again. 
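A minimal sketch of that check, assuming a default install under /opt/splunk (adjust paths for your environment):

  # List any process holding inputs.conf open (path is an assumption)
  lsof /opt/splunk/etc/system/local/inputs.conf

  # If splunkd has it open: stop Splunk, edit the file, then start again
  /opt/splunk/bin/splunk stop
  vi /opt/splunk/etc/system/local/inputs.conf
  /opt/splunk/bin/splunk start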
Hiding those elements is a function of each dashboard, not of the navigation menu:

  <dashboard hideChrome="true" version="1.1">
      ...
  </dashboard>

See https://docs.splunk.com/Documentation/Splunk/9.2.0/Viz/PanelreferenceforSimplifiedXML#dashboard_or_form for the available options.
Are there any sourcetype parsing issues in the splunkd.log on the receiving indexer/forwarder?

  index=_internal host=<receiving indexer/forwarder> log_level!=INFO "test"
https://community.splunk.com/t5/Security/Certificate-generation-failed-Splunkd-port-communication-will/m-p/318926#M12902
Would adding "earliest=<24 hours prior to the search time window>" in the subsearch fix this?
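A sketch of what that could look like, reusing the search from the question further down this page (note that time modifiers inside a subsearch override the outer time range picker, so if the outer search covers the last 24 hours, the subsearch below reaches back 48):

  index=netproxymobility sourcetype="zscalernss-web"
  | fields transactionsize responsesize requestsize urlcategory serverip ClientIP hostname appname appclass urlclass
  | join type=left ClientIP
      [ search index=netlte earliest=-48h latest=now
        | dedup ClientIP
        | fields ClientIP IMEI ]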
I'm struggling to figure this one out. We have data coming in via an HEC endpoint that is JSON based, with the HEC endpoint setting sourcetype to _json. This is Splunk Cloud.

Minor bit of background on our data: all of the data we send to Splunk has an "event" field, which is a number, that indicates a specific type of thing that happened in our system. There's one index where this data goes, with a 45d retention period. Some of this data we want to keep around longer, so we use collect to copy the data over for longer retention. We have a scheduled search that runs regularly that does an "index=ourIndex event IN (1,2,3,4,5,6) | collect index=longTerm output_format=hec". We use output_format=hec because without it the data isn't searchable: "index=longTerm event=3" never shows anything. There's a bunch of _raw, but that's it. Also, for the sake of completeness, this data is being sent by Cribl.

Our application normally logs CSV-style data with the first 15 or so columns fixed in their meaning (everything has those common fields); the 16th column contains a description with parentheses around a semicolon-separated list of additional parameter/field names, where each additional CSV column has a value corresponding to a field name in that list. Sometimes that value is JSON data logged as a string. For the sake of not sending JSON data as a string in an actual JSON payload, we have Cribl detect that, expand that JSON field, and construct it as a native part of the payload. So:

  1,2024-03-01 00:00:00,user1,...12 other columns ...,User did something (didClick;details),1,{"where":"submit"%2c"page":"home"}

gets sent to the HEC endpoint as:

  {"event":1,"_time":"2024-03-01 00:00:00","userID":"user1",... other stuff ..., "didClick":1,"details":{"where":"submit","page":"home"}}

The data that ends up missing is always the extrapolated JSON data. Anything that seems to be part of the base JSON document always seems to be fine.

Now, here's the weird part. If I run the search query that does the collect to ONLY look for a specific event and do a collect on that, things actually seem fine and data is never lost. When I introduce additional events that I want to do a collect on, some of those fields are missing for some, but not all, of those events. The more events I add into the IN() clause, the more those fields go missing for events that have extrapolated JSON in them. For each event that has missing fields, all extrapolated JSON fields are missing.

When I've tried to use the _raw field, use spath on that, then pipe that to collect, that seems to work reliably, but also seems like an unnecessary hack. There are dozens of these events, so breaking them out into their own discrete searches isn't something I'm particularly keen on. Any ideas or suggestions?
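For reference, the spath workaround mentioned above might look something like this (a sketch built from the index and field names in the post):

  index=ourIndex event IN (1,2,3,4,5,6)
  | spath input=_raw
  | collect index=longTerm output_format=hec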
Hi, I have two sets of data: one is proxy logs (index=netproxy) and the other is an extract of LTE logs, which logs every time the device joins. I'd like to cross-reference the proxy logs with the LTE data so I can extract the IMEI number, but the IMEI number could exist in logs outside of the search time window.

The below search works, but only if the timeframe is big enough that it includes the device in the proxy logs. Is there a way I can maybe extend the earliest time to 24 hours prior to the search time window? I don't want to do "all time" on the subsearch because the IP address allocations will change over time and then be matched against the wrong IMEI.

  index=netproxymobility sourcetype="zscalernss-web"
  | fields transactionsize responsesize requestsize urlcategory serverip ClientIP hostname appname appclass urlclass
  | join type=left ClientIP
      [ search index=netlte | dedup ClientIP | fields ClientIP IMEI ]

thanks
How is this data being input to Splunk? You might start by checking the splunkd.log for any parsing errors or warnings. You can also check which props settings are applied to the specific sourcetype using btool on the receiving Splunk indexer/forwarder:

  $SPLUNK_HOME/bin/splunk cmd btool props list <sourcetype>
First, it might be better to just share the KO to global permissions so it can be seen by users of both apps, rather than to copy the KO to the other app, depending on your use case. In case that is not feasible, you have a few options to copy a KO to other apps.

If there are just a few KOs to be copied, you can do this from the GUI:
- Click Settings -> Searches, reports, and alerts
- Search for your KO and click Edit -> Clone, then select the app to clone to from the App dropdown list

In case you need to copy KOs in bulk, it is easier to copy the config from the .conf file from one app to the other. You can also use REST to POST configs.
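As a sketch of the REST option, a saved search can be created in a target app by POSTing to the saved/searches endpoint (the app name, credentials, and the search itself below are placeholders; adjust host and port for your environment):

  curl -k -u admin:changeme \
    https://localhost:8089/servicesNS/nobody/target_app/saved/searches \
    -d name="my_copied_search" \
    -d search="index=main | stats count"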
Hi @whitecat001, if you're speaking about the GUI, you should clone the Knowledge Object and then move it. If you're speaking about the CLI, it depends on the KO: dashboards can be copied; reports, alerts, fields, eventtypes, and the other KOs can be copied from the original file (e.g. savedsearches.conf or eventtypes.conf) in the original app to the new one. Ciao. Giuseppe
Hi @allidoiswinboom, good for you, see you next time! Ciao and happy splunking. Giuseppe P.S.: Karma Points are appreciated
Still the same result, whether it's eval or fieldformat; the result is the same.

  index=MyPCF
  | fields server instance cpu_percentage
  | eval cpu_percentage=round(cpu_percentage,2)
  | eval server_instance = 'server' + "_" + 'instance'
  | timechart mAX(cpu_percentage) as CPU_Percentage by server_instance usenull=true limit=0
  | foreach * [| eval "<<FILED>>"=round('<<FIELD>>',2)."%"]

Then changed to the following and tried, same result:

  | foreach * [| eval "<<FILED>>"= "<<FIELD>>" ."%"]
  | foreach * [| eval "<<FILED>>"= '<<FIELD>>' ."%"]
  | foreach * [| eval "<<FILED>>"= <<FIELD>> ."%"]

  _time                          server_1  server_2  server_3  server_4
  2024-03-25T16:00:00.000-0400   5.18      3         4.62      3.18
  2024-03-25T16:05:00.000-0400   5.46      3.13      3.99      2.94
  2024-03-25T16:10:00.000-0400   5.55      54.16     3.93      51.89
  2024-03-25T16:15:00.000-0400   4.76      4.59      4.4       2.84
  2024-03-25T16:20:00.000-0400   5.54      3.84      4.55      2.95
  2024-03-25T16:25:00.000-0400   4.11      3.76      3.52      3.31
  2024-03-25T16:30:00.000-0400   4.36      3.92      3.58      2.91
  2024-03-25T16:35:00.000-0400   3.88      3.68      3.7       4.08
  2024-03-25T16:40:00.000-0400   3.89      3.32      4.33      3.32
  2024-03-25T16:45:00.000-0400   4.33      27.56     3.94      39.48
Exactly that way. So you must determine which ones those are and, based on that, select SEDCMD or transforms.
More words, please. What is your business case? What "security events" do you want to "forward" from Splunk? Do you want the same events ingested in Splunk and Elastic/Kafka/whatever, or maybe you want to just generate an event in case some alert is triggered in Splunk?
@gcusello Sorry for the late reply, but this helped with the creation of the sourcetype. Thank you for all your help!
Probably append with some stats values() would do the trick, similarly to join.
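A sketch of that append-plus-stats pattern, reusing the index and field names from the question above (untested):

  index=netproxymobility sourcetype="zscalernss-web"
  | fields ClientIP transactionsize responsesize requestsize urlcategory serverip hostname appname appclass urlclass
  | append
      [ search index=netlte earliest=-48h
        | dedup ClientIP
        | fields ClientIP IMEI ]
  | stats values(*) as * by ClientIP

Note that stats by ClientIP groups the proxy events per client; if you need them event by event, the join approach fits better.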
Dear Splunkers,

My goal is to expose only some dashboards to an external customer. I created a dedicated role and user with minimal access to a single app where these dashboards are placed. However, I'm struggling with hiding the Splunk bar/navigation menu, i.e. the customer can still use the "find" window to search for some reports and dashboards he is not supposed to see. Could you please lead me on how to hide it? The navigation menu looks like below:

  <nav search_view="search">
    <view name="search" />
    <view name="datasets" />
    <view hideSplunkBar="true" />
    <view hideAppBar="true" />
    <view hideChrome="true" />
    <view name="reports" />
    <view name="alerts" />
    <view name="dashboards" default='true'/>
  </nav>

regards, Sz
I'm not sure why you do all this magic after the lookup command.

  | lookup activity2 ex_ip as lb OUTPUT ex_ip as match

This will find a row in your lookup table activity2 for which the ex_ip value is equal to the lb value from the event. If such a row is found, the value from the ex_ip column (in this case it's the same column you searched by; it's a common lookup-verifying technique) is copied to the field called "match" in your result set. If there was no match, the 'match' field is left empty.

So if you want to find only those events that matched your lookup, you simply filter for events which have a value in this field:

  | search match=*

It's that simple. If you want to match by other fields, you have to specify other field(s) in your lookup.
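Put together, the whole pipeline might look like this (the base search is a placeholder; only the lookup and filter come from the explanation above):

  index=your_index sourcetype=your_sourcetype
  | lookup activity2 ex_ip as lb OUTPUT ex_ip as match
  | search match=*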
Try and see. The good thing about Splunk search is that it's hard to break something just by searching. And yes, you can use wildcards with the IN operator.
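For example (hypothetical index, field, and values; wildcards in IN work in the search command, though not in eval's in() function):

  index=web_logs status IN (4*, 5*)
  | stats count by status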
Don't replace the <<FIELD>> part on the right side of the eval in foreach with a static field name.
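Note also that the attempts above write to "<<FILED>>", a misspelling that foreach will not substitute, so the rounded value lands in a literal field named <<FILED>> rather than back in each column. A sketch of a corrected line, restricted to the server columns so _time is left untouched:

  | foreach server_* [ eval "<<FIELD>>" = round('<<FIELD>>', 2) . "%" ]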