All Posts

Well, it seems the map command does work in my environment. There is no relation between the two queries. To be more specific, I have a full query that returns everything I need in named columns. I then want to use one of the fields from this query in the search parameters for a second query and return the result as an additional column:

Query 1

index=indexA source=/dir1/dir2/*/*/file.txt
| rex field=source "\/dir1\/dir2\/(?<variableA>.+?(?=\/))\/(?<variableB>.+?(?=\/)).*"
| table variableA, variableB

This will give me 1000 events.

Query 2

index=indexA source=/dir1/dir2/$variableA$/$variableB$/file2.txt
| rex field=_raw "(?<variableC>.+?(?=\/))*"

This will give me one event. I then want my table to be variableA, variableB, variableC, where variableC is the same for each of the 1000 events returned from Query 1.
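For what it's worth, here is a minimal sketch of how map could stitch the two searches together. The field names and paths are taken from the queries above; the dedup and the maxsearches value are assumptions you may want to adjust:

index=indexA source=/dir1/dir2/*/*/file.txt
| rex field=source "\/dir1\/dir2\/(?<variableA>.+?(?=\/))\/(?<variableB>.+?(?=\/)).*"
| dedup variableA variableB
| table variableA variableB
| map maxsearches=1000 search="search index=indexA source=/dir1/dir2/$variableA$/$variableB$/file2.txt | rex field=_raw \"(?<variableC>.+?(?=\/))*\" | eval variableA=\"$variableA$\", variableB=\"$variableB$\" | table variableA variableB variableC"

Note that map runs one subsearch per input row, so deduplicating variableA/variableB first keeps the number of subsearches down.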
Probably something like this:

for i in $(find /opt/splunk/etc -type f \( -name savedsearches.conf -o -name "*.xml" \) -print0 | xargs -0 egrep -l "<your old index>" | egrep -v \.old); do
  echo "file:" $i
  sed -i.backup -e 's/<your old index>/<your new index>/g' $i
done

Check sed's parameters and also test this first!!!! You run this on your own responsibility, without any guarantees!
Hi, those error messages mean that you don't have enough space on the indexers, as you already know and are trying to fix. Probably you have so little free space left that the CM cannot even push those new bundles to the search peers. You must log into those nodes, or use other tools that can check the disk space situation on all of them. It's quite possible that you must manually delete/move some stuff away from those disk partitions before you can apply a new cluster bundle. But it's hard to say before we know the real situation on those search peers. Btw, have you also tried to apply that cluster bundle from the GUI, or just validate it? r. Ismo
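If the peers are reachable from a search head, one hedged way to get a quick overview of free space is the partitions-space REST endpoint (field names may differ slightly by version, so verify against your own output):

| rest splunk_server=* /services/server/status/partitions-space
| eval pct_free=round(free/capacity*100,1)
| table splunk_server mount_point fs_type free capacity pct_free
| sort pct_free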
What you are meaning with "We fail again and again"? What kind of environment you have? Distributed, separate HEC nodes with LB? Basically you could create e.g. dashboard where you are looking status information from _internal & _introspection logs. You could also create alerts based on your normal and abnormal behaviour after that. r. Ismo
You should look at this: https://docs.splunk.com/Documentation/SVA/current/Architectures/TopologyGuidance It contains the preferred Splunk architecture layouts. You should remember that if you have a lot of HEC inputs and you need to update/add them regularly, this impacts your indexers if you are using those instead of HFs for the HEC inputs. For that reason I personally prefer to use a couple of HFs behind an LB as a HEC cluster instead of configuring HEC directly on the indexers. Here are some instructions on how to tune HEC:
- https://community.splunk.com/t5/Getting-Data-In/What-are-the-best-HEC-perf-tuning-configs/m-p/601629
- https://community.splunk.com/t5/Getting-Data-In/Can-we-have-fewer-Heavy-Forwarders-than-Indexers/m-p/551485
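For reference, a minimal sketch of what the HEC input on such an HF could look like (inputs.conf; the token value, index and sourcetype are placeholders):

# inputs.conf on the HEC heavy forwarder
[http]
disabled = 0
port = 8088

[http://my_app_token]
disabled = 0
token = 11111111-2222-3333-4444-555555555555
index = my_app_index
sourcetype = my_app:json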
You could check if there are additional ACLs set for that directory, and especially for those files. Just sudo to root (if possible) and then use the getfacl command to look at them: https://www.computerhope.com/unix/ugetfacl.htm How are those file collections defined in your inputs.conf? I think that with additional ACLs it's possible to set permissions so that you can read those files directly from that directory even if you cannot cd into it.
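For example, a hedged sketch of that check and one possible fix (the paths and the splunk user name are placeholders; test somewhere safe first):

# inspect existing ACLs on the directory and on the files
sudo getfacl /restricted/dir
sudo getfacl /restricted/dir/*.log

# grant the splunk user traverse rights on the directory and read rights on the files
sudo setfacl -m u:splunk:rx /restricted/dir
sudo setfacl -m u:splunk:r /restricted/dir/*.log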
Here is one old example which will probably help you understand how to use it:

<form version="1.1">
  <label>Time Picker Control</label>
  <init>
    <set token="earliest">-24h</set>
    <set token="latest">now</set>
  </init>
  <fieldset submitButton="false">
    <input type="time" token="time_range">
      <label></label>
      <default>
        <earliest>-24h@h</earliest>
        <latest>now</latest>
      </default>
    </input>
  </fieldset>
  <row>
    <panel>
      <title>Simple timechart</title>
      <chart>
        <title>$ranges$</title>
        <search>
          <query>index=_audit | timechart span=1h count</query>
          <earliest>$earliest$</earliest>
          <latest>$latest$</latest>
          <sampleRatio>1</sampleRatio>
        </search>
        <option name="charting.chart">line</option>
        <option name="charting.drilldown">none</option>
      </chart>
    </panel>
    <panel>
      <title>Calculation panel that limits historical range</title>
      <table>
        <search>
          <done>
            <set token="earliest">$result.earliest$</set>
            <set token="latest">$result.info_max_time$</set>
            <set token="ranges">$result.ranges$</set>
          </done>
          <query>| makeresults
| addinfo
| eval min_time=now()-(30*86400)
| eval earliest=if(info_min_time &lt; min_time, min_time, info_min_time)
| eval initial_range="Time Picker range: ".strftime(info_min_time, "%F %T")." to ".strftime(info_max_time, "%F %T")
| eval limited_range="Search range ".strftime(earliest, "%F %T")." to ".strftime(info_max_time, "%F %T")
| eval ranges=mvappend(initial_range, limited_range)
| table ranges earliest info_min_time info_max_time</query>
          <earliest>$time_range.earliest$</earliest>
          <latest>$time_range.latest$</latest>
        </search>
        <option name="refresh.display">progressbar</option>
      </table>
    </panel>
  </row>
</form>

I cannot remember who presented it, or when; probably here or on Slack?
In your case (since you have multiline and multiple events in one JSON file) you should use INDEXED_EXTRACTIONS=json on your UF side. So remove it from the SH side. If I read this file correctly, it contains 25 events? Unfortunately I don't have a suitable environment to test this end to end (UF -> IDX -> SH), but just leave INDEXED_EXTRACTIONS in the UF's props.conf (restart it after that) and remove it from the SH (and the IDX side if you have it there too). Then it should work. Usually props.conf should/must be on the indexer or on the first full Splunk Enterprise instance on the path from UF to IDX. You could/should also put it on the SH when there are runtime definitions which are needed there. There are only a few definitions which must be on the UF side. This https://www.aplura.com/assets/pdf/where_to_put_props.pdf describes when and where you should put it when you are ingesting events. You can find more instructions at least in Lantern and docs.splunk.com. BTW, why are you using jq to pretty-print that JSON file? It adds a lot of extra spaces, newline characters and other unnecessary stuff to your input file. Those characters just increase your license consumption!
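A minimal sketch of those settings, assuming a sourcetype name of my_json (adjust to your own sourcetype and restart the UF afterwards):

# props.conf on the UF ($SPLUNK_HOME/etc/apps/<your_app>/local/props.conf)
[my_json]
INDEXED_EXTRACTIONS = json

# props.conf on the SH, to avoid duplicate field extraction at search time
[my_json]
KV_MODE = none
AUTO_KV_JSON = false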
I'm not sure how to interpret your question. Do you mean $t_time.latest$ comes from an input selector? (@isoutamo's link shows how to retrieve the value after a search is complete.) For this, one way to handle it is to test its value before format:

index="abc" search_name="def"
    [| makeresults
     | eval earliest=relative_time($t_time.latest$, "-1d@d")
     | eval latest=if("$t_time.latest$" == "now", now(), relative_time($t_time.latest$, "@d"))
     | fields earliest latest
     | format]
| table _time zbpIdentifier
Glad it worked out. JSON allows for semantic expression. The more traditional "Splunk" trick is to use string concatenation, then split after stats. The tojson command is present in all Splunk versions; in this case, it is also very concise. If you remove the rest of the search after that chart, you'll see something like this:

_raw                                                                          false  true
{"lastLogin":"2024-12-12T23:42:47","userPrincipalName":"yliu"}                       28
{"lastLogin":"2024-12-13T00:58:38","userPrincipalName":"splunk-system-user"}  290    150

The intent is to construct a chart that will render the desired table layout while retaining all the data needed to produce the final presentation. (This is why I asked for a mockup table, so I know how you want to present the data. Presentation does influence the solution.)
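In case it is useful, here is a hedged sketch of that older concatenation trick, using the same field names as the tojson version and a hypothetical result field for whatever produces the false/true columns:

... | eval key=userPrincipalName . "|" . lastLogin
| chart count over key by result
| eval userPrincipalName=mvindex(split(key, "|"), 0), lastLogin=mvindex(split(key, "|"), 1)
| fields - key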
Unless you start Splunk on each of those intermediate versions, it doesn't do the conversions and other actions which are needed before the next update. Now you have done a direct update from 9.0.x to 9.3.2, and this is not a supported path. Usually Splunk is installed as root, but it should run as the splunk (or another non-root) user. Have you looked at what the logs say, especially migration.log and splunkd.log?
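A quick hedged way to look at those, assuming the default $SPLUNK_HOME of /opt/splunk:

# one migration log is written per upgrade attempt
sudo ls -l /opt/splunk/var/log/splunk/migration.log*
sudo grep -iE "error|fail" /opt/splunk/var/log/splunk/migration.log*

# recent problems in splunkd.log
sudo grep -iE "error|fatal" /opt/splunk/var/log/splunk/splunkd.log | tail -n 50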
I see -

$ ps -ef | grep splunk
splunk   2802446 2802413  0 Dec08 ?  00:00:08 [splunkd pid=2802413] splunkd --under-systemd --systemd-delegate=yes -p 8089 _internal_launch_under_systemd [process-runner]

Meaning, the user splunk runs on the host, and when I sudo to be the splunk user, I don't have access to these log files, even though they are being ingested.
I love the idea of HEC inputs directly on indexers (with an LB in front of them)!
You have no _time in your output fields. See https://docs.splunk.com/Documentation/Splunk/latest/Knowledge/UseSireportingcommands#Summary_indexing_of_data_without_timestamps
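As a hedged illustration, one common way to avoid that is to keep (or rebuild) _time before writing to the summary index; host here stands for whatever split-by fields you actually use:

... your search ...
| bin _time span=1h
| stats count by _time, host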
We have more than one instance of S1 configured in the SentinelOne app on our SH. We do NOT have the S1 TA installed anywhere else. We have noticed that you can only specify a single "SentinelOne Search Index" in the base configuration. We have more than one index because we have configured each instance to go to a different index. Because of this, the only index where events are typed and tagged properly is the index we have selected in the app. Does anyone know how we can get around this and have the events in the other indexes typed and tagged correctly?
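One possible workaround, assuming the typing/tagging is driven by eventtypes in the app that restrict the search to the single configured index (check the app's default/eventtypes.conf to confirm; the stanza name, index names and sourcetype below are hypothetical): override those eventtypes in local/ so they cover all of your S1 indexes, e.g.

# $SPLUNK_HOME/etc/apps/<sentinelone_app>/local/eventtypes.conf
# copy the real stanza name from default/eventtypes.conf
[sentinelone_activities]
search = (index=s1_index_a OR index=s1_index_b) sourcetype=sentinelone*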
And what user does your splunkd run as?
I'm not sure what you mean by "subcluster". You can have HEC inputs directly on indexers (and have an LB in front of them), or you can have a farm of HFs with HEC inputs sending to the cluster.
Thanks, this worked like a charm!
You may just want to copy/paste a small sample of anything interesting in those logs. Can you also check /var/log/syslog or /var/log/messages for potentially interesting errors coming directly from the OTel collector?
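For example, something along these lines (the otelcol unit name is an assumption; adjust to however the collector is installed on your host):

# if the collector runs under systemd
sudo journalctl -u otelcol --since "1 hour ago" --no-pager | grep -iE "error|warn"

# otherwise fall back to the system logs
sudo grep -i otel /var/log/syslog /var/log/messages 2>/dev/null | grep -iE "error|warn" | tail -n 50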
The logs that I found are /var/log/apache2/access.log and error.log. I don't know how to attach all logs here.