While you can go even more generic -

| foreach * [ | eval a=mvappend(a,if("<<FIELD>>"=="EventID",null(),json_object("location",location,"name","<<FIELD>>","value",'<<FIELD>>'))) ]
| fields - _raw
| mvexpand a
| fields a
| spath input=a
| fields - a

(This works but throws an exception about a templatized search for a field; I would have to investigate it deeper.) But it won't do in the context of a data model. Data model constraints must be a single non-piped search.
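For contrast, a valid constraint is just a plain base search with no pipes; the index and sourcetype below are made up purely for illustration:

index=sensor_data sourcetype=weather:json location=*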
Unfortunately there does not seem to be a global setting to disable dashboard refreshing. You may need to optimize each dashboard to reduce its search redundancy, or use XML settings to disable refreshing on its panels.
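As a rough sketch (the panel and query here are hypothetical), auto-refresh in Simple XML is driven by the refresh attribute on a panel's search; omitting that attribute stops the periodic rerun:

<panel>
  <chart>
    <!-- refresh="300" reruns the search every 5 minutes; remove the attribute to disable auto-refresh -->
    <search refresh="300" refreshType="delay">
      <query>index=_internal | timechart count</query>
      <earliest>-60m@m</earliest>
      <latest>now</latest>
    </search>
  </chart>
</panel>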
1. We know neither your events nor your summary index contents.
2. There is a lot going on here. Try to avoid eventstats if possible. It's a "heavy" command and can run out of memory.
3. You bin by 5m but name your fields as if they were hourly.
4. You're generating several fields which you don't use later.
Hi @Cheng2Ready,
at first don't rename hour_bucket, then don't use values in the timechart command. And why are you using all these stats? At least, why do you want to list all the values of count without the yyyId? What do you want to extract?
Please try:

(index="xxxx" field.type="xxx") OR index=Summary_index
| eventstats values(index) as sources by trace
| where mvcount(sources) > 1
| spath output=yyyId path=xxxId input=_raw
| where isnotnull(yyyId) AND yyyId!=""
| bin _time span=5m
| stats latest(_time) AS last_activity_in_hour count BY _time yyyId
| stats values(count) AS "Unique Customers per Hour" BY _time

Could you share more details about your requirement?
Ciao.
Giuseppe
Understood. When I say to use Splunk's mature/robust solution, it doesn't mean it has to happen inside Splunk. All you need is to use the regex that Splunk has QA-tested for you. The regex in Splunk's url transformation is this:

(?<url>[[alphas:proto]]://(?<domain>[a-zA-Z0-9\-.:]++)(?<uri>/[^\s"]*)?)

Here is the same test, except I substitute the transform with the above regex:

| makeresults format=csv data="_raw
http://www.google.com/search?q=what%20about%20bob
https://yahoo.com:443/
ftp://localhost:23/
ssh://1234:abcd:::21/"
| rex "(?<url>[[alphas:proto]]://(?<domain>[a-zA-Z0-9\-.:]++)(?<uri>/[^\s\"]*)?)"
| rex field=domain "(?<domain>.+)(?::(?<port>\d+))$"
| rename proto as scheme

(Because the rex command requires double quotes, I have to escape the double quote inside the uri group.) It gives the exact same valid results that you want:

_raw | domain | port | scheme | uri | url
http://www.google.com/search?q=what%20about%20bob | www.google.com | | http | /search?q=what%20about%20bob | http://www.google.com/search?q=what%20about%20bob
https://yahoo.com:443/ | yahoo.com | 443 | https | / | https://yahoo.com:443/
ftp://localhost:23/ | localhost | 23 | ftp | / | ftp://localhost:23/
ssh://1234:abcd:::21/ | 1234:abcd:: | 21 | ssh | / | ssh://1234:abcd:::21/
Try removing lines from the end of the search, one at a time, until the results appear; then you will know which line is causing the problem. If that doesn't work, try sharing some events from the index and the summary index to show us what you are dealing with.
Hi @thierry
How about this? Ultimately I think you need to get it into a multivalue field so you can expand into individual events:

| foreach temperature humidity
    [ eval contents=mvappend(contents,json_set("{}","name","<<FIELD>>","value", <<FIELD>>)) ]
| mvexpand contents
| eval contents=json_set(contents,"location",location)
| eval _raw=contents

Did this answer help you? If so, please consider:
- Adding karma to show it was useful
- Marking it as the solution if it resolved your issue
- Commenting if you need any clarification
Your feedback encourages the volunteers in this community to continue contributing.
Without using a subsearch, since there is a limit of 10000 results:

index="xxxx" field.type="xxx" OR index=Summary_index
| eventstats values(index) as sources by trace
| where mvcount(sources) > 1
| spath output=yyyId path=xxxId input=_raw
| where isnotnull(yyyId) AND yyyId!=""
| bin _time span=5m AS hour_bucket
| stats latest(_time) as last_activity_in_hour, count by hour_bucket, yyyId
| stats count by hour_bucket
| sort hour_bucket
| rename hour_bucket AS _time
| timechart span=5m values(count) AS "Unique Customers per Hour"

Still doesn't return any results.
I have events already in an index looking like this:

{
   "location": "Paris",
   "temperature": 25,
   "humidity": 57
}

I have a data model looking like this:

{
   "location": string,
   "name": string,
   "value": number
}

and so I would need my event to show up as two events under this data model:

{
   "location": "Paris",
   "name": "temperature",
   "value": 25
}

and

{
   "location": "Paris",
   "name": "humidity",
   "value": 57
}

What would be the best way to proceed? I have had no luck with field manipulation / tables so far.
That's two different things.

To resolve a pass4SymmKey mismatch, update the plain-text pass4SymmKey value in server.conf and restart the Splunk instance. Splunk will encrypt the value when it restarts. Repeat for each SH and peer. Do NOT copy an encrypted pass4SymmKey from another Splunk instance.

Peers do not phone home, so you must be referring to forwarders contacting the Deployment Server (DS). Ensure all clients have the correct DS info in deploymentclient.conf and that the network permits connections from each client to the DS.

With more information about the problem(s) we can be more specific about the solution(s).
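For reference, a minimal sketch of the relevant server.conf stanza; the key value below is a placeholder, and for indexer clustering the key lives under [clustering] rather than [general]:

# $SPLUNK_HOME/etc/system/local/server.conf
[general]
# Enter the shared secret in plain text; Splunk encrypts it on restart.
pass4SymmKey = MySharedSecret123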
I'll mark this down as the solution and figure out how to push the modified settings from a manager.
How do I resolve an authentication or pass4SymmKey mismatch between a search head and a peer? I'm also getting a situation where 0 clients phone home.
I appreciate the response! I don't really understand how it works, but I was able to use your suggestion as a guide and came up with this. The two events have different source types, so I needed the OR. I had always thought "count" was just for summing up on fields, yet here the field values I need are in the results. So I guess in my situation "where count=1" works because the primary event will always have a match. So a count of 1 means the primary search matched and the secondary didn't, not the other way around.

index=idx1 (sourcetype=source1 "Queueing create notifications for EventId:") OR (NotificationService CREATED EVENT sourcetype=source2)
| rex "EventId: (?<event_id1>\d+) in client (?<client_id>\d+)"
| rex "\"eventId\",\"value\":\"(?<event_id2>\d+)"
| eval event_id=coalesce(event_id1,event_id2)
| fields client_id, event_id
| stats values(*) as * count by event_id
| where count=1
Hi all, I have a dashboard created in Studio which uses multiple tabs, each tab housing a different dashboard. I wanted to see if it is possible to change the color of the font in the tab name. I tried to add "fontColor": "#...", to various sections in the source code, but nothing I do seems to be able to change the color of the tab name. Thanks!
Without going into verbose detail, it isn't Splunk that is doing the domain extraction, hence I need to rely on regex.
Hi, I think something to consider here is the intended use case for "try now". It's designed as a validation step while editing a test, not an on-demand call to run the test once and analyze results. Since you're working in a CI/CD pipeline, I'm thinking your use case is probably the latter. I believe we have a feature on our roadmap that will better serve your use case. You may want to ask your account rep for a roadmap call to discuss this feature with an engineer who can help determine whether the upcoming feature meets your use case.
Hi, it's probably worth noting that the message is "info" level and not actually an "error"; it's mostly just annoying. That directory is managed at runtime, so even if you put placeholder files in there, they would get automatically deleted. As a heads-up, there is movement away from the smartagent receivers in favor of native OTel receivers as they become available. You may want to try the OTel rabbitmq receiver instead: https://help.splunk.com/en/splunk-observability-cloud/manage-data/available-data-sources/supported-integrations-in-splunk-observability-cloud/applications-messaging/rabbitmq
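To give a feel for it, here is a minimal sketch of the native receiver in the collector's agent config; the endpoint, credentials, and exporter name are assumptions to adapt to your environment:

receivers:
  rabbitmq:
    # RabbitMQ management-plugin endpoint; adjust host/port for your broker.
    endpoint: http://localhost:15672
    username: otelu
    password: ${env:RABBITMQ_PASSWORD}
    collection_interval: 10s

service:
  pipelines:
    metrics:
      receivers: [rabbitmq]
      exporters: [signalfx]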
Digging deeper, I saw that collectd.conf had an Include line at the bottom of the default config. I wound up commenting this out. ¯\_(ツ)_/¯
We are planning to transition to the cloud and have realized that about 30% of our searches are dashboard refreshes. To minimize SVCs, I wonder if there is a way to disable dashboard refreshes, and whether the features differ between on-prem and cloud.
Also, if I create the directory /usr/lib/splunk-otel-collector/agent-bundle/run/collectd/global/managed_configs/ and stick the collectd.conf in there, the directory is removed when I restart the service.