All Posts


Hi @Eshwar, You can add the "output_mode=json" parameter to get JSON output. Please see below:

curl -k -u admin:password https://localhost:8089/services/search/jobs/export -d search="search sourcetype=splunkd earliest=-1h" -d output_mode=json
@scelikok Yes, I know this add-on, but the HEC token approach works for me because my Kafka is inside a Kubernetes cluster.
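For illustration, a minimal HEC connectivity test from inside the cluster might look like the following sketch (the hostname, port, and token are placeholders, not values from this thread):

curl -k https://splunk.example.com:8088/services/collector/event -H "Authorization: Splunk <your-hec-token>" -d '{"event": "connectivity test", "sourcetype": "kafka:test"}'

A {"text":"Success","code":0} response confirms the token and the network path from the pod.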
Hi @AL3Z, You can check the notable index directly, but using the notable macro is much easier:

`notable` | timechart count by rule_name
Hi @uagraw01, If you need to ingest data from Kafka into Splunk, you can check out "Splunk Connect for Kafka": https://splunkbase.splunk.com/app/3862
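For reference, that add-on runs as a Kafka Connect sink connector; registering it through the Kafka Connect REST API looks roughly like this sketch (every name, URI, and token below is a placeholder - see the add-on's documentation for the authoritative property list):

curl -X POST http://kafka-connect.example.com:8083/connectors -H "Content-Type: application/json" -d '{
  "name": "splunk-sink",
  "config": {
    "connector.class": "com.splunk.kafka.connect.SplunkSinkConnector",
    "tasks.max": "3",
    "topics": "my-topic",
    "splunk.hec.uri": "https://splunk-hec.example.com:8088",
    "splunk.hec.token": "<your-hec-token>"
  }
}'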
I am looking to use only Splunk internal logs for this. How can I use the internal metrics log of a UF to fetch CPU and memory data for that same UF?
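For context, one hedged starting point - assuming the UF forwards its _internal index, as it does by default - is the pipeline CPU accounting in metrics.log. Note this measures splunkd's own pipeline CPU rather than host-wide usage, and <your_uf_host> is a placeholder:

index=_internal host=<your_uf_host> source=*metrics.log* group=pipeline
| timechart span=5m sum(cpu_seconds) as splunkd_cpu_seconds by name

For memory, the UF's introspection resource_usage.log is sometimes used instead, if it is forwarded in your environment.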
Hi @snix, have you read this: https://docs.splunk.com/Documentation/Splunk/latest/Security/AboutsecuringyourSplunkconfigurationwithSSL ? Ciao. Giuseppe
Hi @Roy_9, I suppose it's a script from an add-on - which one? If it's a Splunk-supported add-on, you can open a case with Splunk Support. Are you sure the issue is in the script? What happens if you disable it? Ciao. Giuseppe
Hello Splunkers!! I want to connect or configure Splunk with Kafka. Our Kafka resides in a Kubernetes cluster. Please guide me on what kind of approach I should follow, because there is a lot of material available and it's confusing for me.
Hi @gcusello OK. Thanks for your advice. 
Hi @richgalloway, We tried passing the output_mode parameter to the REST endpoint, but we still see an XML response.
The thing is, it does not listen at all on Linux after the mentioned version. On Windows I could check, and it works as documented: by default it is limited to localhost. Anyway, thanks for this info.
The report at startup indicates port 8089 is not in use by another process (it's "open" for use). It does not mean Splunk is listening on that port (at least not yet). Version 9.0 changed the default behavior of the UF's management port. See the Release Notes at https://docs.splunk.com/Documentation/Splunk/9.0.8/ReleaseNotes/MeetSplunk#What.27s_New_in_9.0
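If you want the forwarder to listen on 8089 again, a minimal sketch follows; I believe 9.0+ UFs ship with this setting flipped, but verify the setting name against your version's server.conf.spec before relying on it.

# $SPLUNK_HOME/etc/system/local/server.conf on the forwarder
[httpServer]
# 9.0+ UF packages default this to true, which turns off the management port
disableDefaultPort = false

Restart the forwarder afterward, and weigh the security implications - the default was changed deliberately.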
Percentage as the sum of values in each time bucket?

index IN ("Index 1", "Index 2", "Index 3")
| timechart count by index
| addtotals
| foreach * [eval <<FIELD>> = if(Total == 0, 0, <<FIELD>> / Total * 100)]
| fields - Total

As @scelikok indicates, moving the index filter into the base search is more efficient. (The above is an alternative syntax.)
Hi, I would like to know about the notable events triggered from CS (correlation searches) without accessing the Incident Review dashboard, as we are experiencing a significant number of notables being triggered consistently. How can we identify the source of noise from a specific correlation search? Thanks in advance.
Before delving into regex details, could you explain what "badness" in the sample data you are trying to rectify? What are the expected results? (Also, please use a code section that auto-wraps.)

In the output of your sample code, the "good" entry is exactly unchanged from the original entry. (By the way, the alternative value in the if function cannot be new. It should be old_field.)

To be clear, your sample code does not replace non-alphanumeric characters at all; it executes an extremely complex, purpose-built match. If the sole goal is to replace non-alphanumeric characters globally, replace(old_field, "\W", "__non_alphanumeric__") suffices. Here is a simple example to do this when old_field is the only field of interest.

| makeresults
| fields - _time
| eval old_field = mvappend("{\"bundle\": \"com.servicenow.blackberry.ful\", \"name\": \"ServiceNow Agent\\u00ae - BlackBerry\", \"name_version\": \"ServiceNow Agent\\u00ae - BlackBerry-17.2.0\", \"sw_uid\": \"faa5c810a2bd2d5da418d72hd\", \"version\": \"17.2.0\", \"version_raw\": \"0000000170000000200000000\"}", "{\"bundle\": \"com.penlink.pen\", \"name\": \"PenPoint\", \"name_version\": \"PenPoint-1.0.1\", \"sw_uid\": \"cba7d3601855e050d8new0f34\", \"version\": \"1.0.1\", \"version_raw\": \"0000000010000000000000001\"}")
| eval sourcetype="custom:data"
``` data emulation above ```
| mvexpand old_field
| spath input=old_field
| fields - old_field
| foreach version * [eval <<FIELD>> = if(sourcetype == "custom:data", replace(<<FIELD>>, "\W", "__non_alphanumeric__"), <<FIELD>>)]
| tojson output_field=new
| stats values(new) as new

The result is a two-value field:

{"bundle":"com__non_alphanumeric__penlink__non_alphanumeric__pen","name":"PenPoint","name_version":"PenPoint__non_alphanumeric__1__non_alphanumeric__0__non_alphanumeric__1","sourcetype":"custom__non_alphanumeric__data","sw_uid":"cba7d3601855e050d8new0f34","version":"1__non_alphanumeric__0__non_alphanumeric__1","version_raw":"0000000010000000000000001"}

{"bundle":"com__non_alphanumeric__servicenow__non_alphanumeric__blackberry__non_alphanumeric__ful","name":"ServiceNow__non_alphanumeric__Agent__non_alphanumeric____non_alphanumeric____non_alphanumeric____non_alphanumeric__BlackBerry","name_version":"ServiceNow__non_alphanumeric__Agent__non_alphanumeric____non_alphanumeric____non_alphanumeric____non_alphanumeric__BlackBerry__non_alphanumeric__17__non_alphanumeric__2__non_alphanumeric__0","sourcetype":"custom__non_alphanumeric__data","sw_uid":"faa5c810a2bd2d5da418d72hd","version":"17__non_alphanumeric__2__non_alphanumeric__0","version_raw":"0000000170000000200000000"}

Are you trying to replace, say, "." with one alphanumeric string (e.g., "dot"), ":" with a different alphanumeric string (e.g., "colon"), and so on and so forth? If so, what are the rules?

Simply put: forget about regex altogether. Could you explain the logic between the sample data and the desired results? Also, is the end goal to form a JSON field, or do you expect to extract JSON nodes into fields?
You need to first extract the data beyond the "dynamic" key. (Depending on semantics, I suspect there is some data design improvement your developers could make so downstream users don't have to do this workaround.)

| spath input=json_data path=data output=beyond
| eval key = json_array_to_mv(json_keys(beyond))
| eval beyond = json_extract(beyond, key) ``` assuming there is only one top key ```
| spath input=beyond path=x
| spath input=beyond path=y

The following is a full emulation (I don't see the purpose of all the transposes):

| makeresults count=1
| eval json_data="{\"data\": {\"a\": {\"x\": {\"mock_x_field\": \"value_x\"}, \"y\": {\"mock_y_field\": \"value_y\"}}}}"
| append
    [ makeresults count=1
    | eval json_data="{\"data\": {\"b\": {\"x\": {\"mock_x_field\": \"value_x\"}, \"y\": {\"mock_y_field\": \"value_y\"}}}}" ]
| append
    [ makeresults count=1
    | eval json_data="{\"data\": {\"c\": {\"x\": {\"mock_x_field\": \"value_x\"}, \"y\": {\"mock_y_field\": \"value_y\"}}}}" ]
| append
    [ makeresults count=1
    | eval json_data="{\"data\": {\"d\": {\"x\": {\"mock_x_field\": \"value_x\"}, \"y\": {\"mock_y_field\": \"value_y\"}}}}" ]
| spath input=json_data path=data output=beyond
| eval key = json_array_to_mv(json_keys(beyond))
| eval beyond = json_extract(beyond, key)
| spath input=beyond path=x
| spath input=beyond path=y
| table json_data x y beyond

The output has four rows, one per top key (a, b, c, d); json_data varies by key while x, y, and beyond are identical in every row:

json_data: {"data": {"a": {"x": {"mock_x_field": "value_x"}, "y": {"mock_y_field": "value_y"}}}} (and likewise for b, c, d)
x: {"mock_x_field":"value_x"}
y: {"mock_y_field":"value_y"}
beyond: {"x":{"mock_x_field":"value_x"},"y":{"mock_y_field":"value_y"}}
@richgalloway's solution should give the correct results and is more efficient. But you need to clarify @danielcj's question thoroughly. In your response, you reprinted

| rename id as sessionID

as in the first part of your original post, which contradicts the second part of your original post, where

| rename message.id as sessionID

is printed. Does the api index give id, or message.id, or both but with only one of them to be used as sessionID? @richgalloway's solution should work in case 2. If the api index gives id (or if it gives both but only id should be used in the match) - which the first part of your original post and your reply to @danielcj imply - the solution can easily be adapted to

(index=api source=api_call) OR index=waf
| fields id, apiName, message.payload, src_ip, requestHost, requestPath, requestUserAgent, sessionID
| eval sessionID = coalesce(sessionID, id)
| stats values(*) as * by sessionID

Hope this helps.
Maybe it's as easy as adding it in the WHERE clause:

| mstats latest_time(application_ready_time.value) as latest_ts WHERE index=my-metrics-index host=some-host app.name IN ("appname1", "appname2") BY app.name
| eval past_threshold=if(now() - latest_ts >= 30, "Y", "N")
| eval latest=strftime(latest_ts, "%Y-%m-%d %H:%M:%S")
| table app.name latest past_threshold

Give that a try. If it doesn't work, you can just post-filter it with something like

| mstats latest_time(application_ready_time.value) as latest_ts WHERE index=my-metrics-index host=some-host BY app.name
| search app.name IN ("appname1", "appname2")
| eval past_threshold=if(now() - latest_ts >= 30, "Y", "N")
| eval latest=strftime(latest_ts, "%Y-%m-%d %H:%M:%S")
| table app.name latest past_threshold

The former, with the app.name values as part of the WHERE clause, is probably preferable. Splunk can often push these sorts of search terms down into the search itself, but I'm not sure if it does that with mstats. Meaning, if it CAN do that, the second will perform the same as the first (more or less). But if it can't, it will run quite a bit slower, because it will pull all those stats off disk and then throw away all but the two sets you want to keep. It would work either way, though.
Hi All, I updated the Splunk Universal Forwarder from 8.2.6 to 9.1.3 on a Debian host. Basically no specific configuration, everything by default. I would like to use the REST capabilities, which I already used with the older version, but this time the port is not listening, even though startup says it is:

Checking mgmt port [8089]: open

Netstat shows no 8089 either. The host has no firewall, no bulls**t, just a pure playground, and as I said, the older version worked perfectly. What can be the problem, another bug in the software?
Have you tried to just use $info_sid$ in the href?

| eval application_name = "<a href=https://<hostname>:8000/en-US/app/search/security_events_dashboard?form.field2=&form.application_name=" . application_name . ">" . application_name . "</a>"
| eval email_subj="Security Events Alert", email_body="<p>Hello Everyone,</p><p>You are receiving this notification because the application has one or more security events reported in the last 24 hours.<br></p><p>Please click on the link available in the table to fetch events for a specific application.</p><p>To view Splunk results <a href=https://<hostname>:8000/en-US/app/search/search?sid=".$info_sid$.">Click here</a></p>"

If your scheduled search has already sent an alert, you can go to the "Activities" menu and find the exact URL for that search. I don't believe Splunk accepts anything except the dotted-numeral SID.