All Posts

Have you started your instance(s) every time after applying a new version? This is needed to run the required conversions, e.g. from 8.2.8 -> 9.1.0 and so on. Without those starts it's almost the same as going directly from 8.2.8 -> 9.3.4, especially if you are using the tar.gz package. With rpm and deb, installing a new version also removes some old, unneeded files. But all conversion tasks are done only when you start the instance.
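For illustration, a stepwise tar.gz upgrade on Linux could look roughly like this (paths, file names and version numbers are placeholders, not taken from the thread):

# stop the running instance before laying down the next version
/opt/splunk/bin/splunk stop
# unpack the next release in the upgrade path over the existing installation
tar -xzf splunk-9.1.0-<build>-Linux-x86_64.tgz -C /opt
# starting the instance is what actually runs the migration/conversion tasks
/opt/splunk/bin/splunk start --accept-license --answer-yes
# repeat stop / extract / start for each subsequent version (9.2.x, 9.3.x, ...)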
You could use dedup with the sortby parameter, as I previously showed.
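One possible form, using the field names from the sample data in this thread; it is worth verifying on your version whether sortby orders events before or after the deduplication - if in doubt, an explicit | sort before dedup (as in the earlier reply) is unambiguous:

| bin _time span=1d
| dedup ID _time sortby - _time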
Indeed, lookups often end up with multivalue fields. You need to make sure that every field to include has an equal number of values. Usually I am in favor of JSON, as in @livehybrid's suggestion, although it should not be that complex; especially, one should not compose JSON with string concatenation. More on JSON later.

There is an even simpler approach if you can enumerate the fields to include: multikv. No mvexpand needed. Here is how:

| eval _raw = mvappend("FunctionGroup,MsgNr,alarm_severity,area,equipment", mvzip(mvzip(mvzip(mvzip(FunctionGroup, MsgNr, ","), alarm_severity, ","), area), equipment, ","))
| multikv forceheader=1
| fields - _raw linecount

The idea is to compose a CSV table with mvzip, then extract from this table. If composing nested mvzip is too much, or if you cannot easily enumerate the fields to include, you can add foreach to your arsenal:

| rename FunctionGroup as _raw
| eval header = "FunctionGroup"
| foreach MsgNr,alarm_severity,area,equipment
    [ eval _raw = mvzip(_raw, <<FIELD>>, ","), header = header . "," . "<<FIELD>>"]
| eval _raw = mvappend(header, _raw)
| multikv forceheader=1
| fields - _raw header linecount

Now, back to JSON - in this use case, it is more involved than multikv. Again, with the help of foreach, and provided that your Splunk version is 8.1 or later, this is a semantic way to do it:

| eval jcombo = json_object()
| eval idx = mvrange(0, mvcount(FunctionGroup))
| foreach FunctionGroup MsgNr alarm_severity area equipment
    [ eval jcombo = json_set(jcombo, "<<FIELD>>", mvindex(<<FIELD>>, idx))]
| fields - FunctionGroup MsgNr alarm_severity area equipment
| mvexpand jcombo
| fields - idx jcombo

Of course, you can also do this without foreach.
Is this the result you are looking for?

ID     billing_date  code      latest(cost)  _time
10001  2025-05-01    product2  135.75        2025-05-02 10:15:00
10001  2025-05-01    product3  155.00        2025-05-02 13:30:00
10001  2025-06-01    product1  102.50        2025-06-01 08:10:00
10001  2025-06-01    product2  130.75        2025-06-02 10:15:00
10001  2025-06-01    product3  150.00        2025-06-02 13:30:00

dedup with a perfect sort as @PickleRick suggests should work. Another way is to simply use stats as I originally suggested:

| stats latest(cost) max(_time) as _time by ID billing_date code
Hello @new , can you try directly running a search with the log file name, or a keyword from that custom add-on's logs, on Splunk Cloud and check how it goes?
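For example, a quick check could look something like the following (the source pattern and keyword here are placeholders for your add-on's actual log file and content):

index=* source="*your_custom_addon*"
index=* "some keyword from the add-on logs"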
Hi @bakeery

Please can you confirm which UF version you are running? There is a known issue (SPL-217199) in versions earlier than 9.0.1 relating to the WinEventLog sourcetype having encoded, broken fields appended, and I'm wondering if this could be related? See https://splunk.my.site.com/customer/s/article/Special-characters-in-sourcetype-for-windows-data-in-UF for more info. If you are on a version earlier than 9.0.1, I would recommend upgrading to see if this resolves the issue.

Did this answer help you? If so, please consider:
- Adding karma to show it was useful
- Marking it as the solution if it resolved your issue
- Commenting if you need any clarification
Your feedback encourages the volunteers in this community to continue contributing
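If it helps, one common way to check forwarder versions from the indexing tier is a search along these lines (a sketch based on the fields normally present in metrics.log tcpin_connections events; worth validating in your environment):

index=_internal source=*metrics.log* group=tcpin_connections
| stats latest(version) as uf_version latest(os) as os by hostname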
Hi @Mirza_Jaffar1

What was the previous version, and which version are you on now? Did you get a clean start after upgrading to the previous version from the version before it?

Did this answer help you? If so, please consider:
- Adding karma to show it was useful
- Marking it as the solution if it resolved your issue
- Commenting if you need any clarification
Your feedback encourages the volunteers in this community to continue contributing
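As a quick way to confirm the installed version and see when migrations last ran, something like this on the server may help (paths assume a default $SPLUNK_HOME; adjust as needed):

# current version of the installed binaries
cat $SPLUNK_HOME/etc/splunk.version
# migration logs are written each time a start triggers conversion tasks
ls -l $SPLUNK_HOME/var/log/splunk/migration.log*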
Hi @Cheng2Ready

Without the original data it's a little hard to say, but you could try a timechart instead of stats:

(index=xxx sourcetype=xxx) OR (index=summary_index)
| timechart span=1d values(index) as sources by trace
| where mvcount(sources) > 1

Update the span=1d according to your needs.

Did this answer help you? If so, please consider:
- Adding karma to show it was useful
- Marking it as the solution if it resolved your issue
- Commenting if you need any clarification
Your feedback encourages the volunteers in this community to continue contributing
This occurred while upgrading Splunk Enterprise along the path 8.2.8 -> 9.1.0 -> 9.2.0 -> 9.3.0 to 9.3.4.
Let me jump in and offer some side notes (you're of course free to ignore me completely, as this might not be related to the immediate problem at hand). I'll leave aside for now the append command itself, but the appended search could be made much more efficient. You're doing a lot of work before discarding a (probably) significant portion of your data.

Firstly, you could do the two spath commands after searching for area=* and dedup. This way you'll do much less JSON parsing, which is quite heavy.

Secondly, you could just parse path=EquipmentEventReport.EquipmentEvent.ID.Location.PhysicalLocation as a whole and get three fields for the price of one run of spath.

Thirdly, if you want to get all events having anything in the ...area path, you could first limit your search to results containing "area" as a search term. It might not be 100% accurate since the word can occur somewhere else in your events, but it might be a pretty good way to narrow the search. (Of course it won't work if you have a field named "area" in another "branch" of your JSONs in 100% of your events, but it's worth checking out.)

Fourthly, as I understand it you have quite sizeable JSONs. It's best to drop them as early as possible, so you should move your fields - _raw as far up your search as possible - probably right after dedup.

And finally, your data is very tricky to work with. You have multiple multivalued fields. I understand the assumption is that the first values of all those fields describe the same "event" or "state" or whatever, the second values form another tuple, and so on. The trouble is that there is no way in Splunk to make sure of it unless you are absolutely certain that your input data is always fully populated and correct, and additionally properly ingested, parsed and so on. Otherwise a single missing value here and there squashes your values together. So relying on the order of values across multiple multivalued fields is extremely tricky. Unfortunately, sometimes the input data is simply very badly formatted and you don't have much choice, but it might be worth raising this issue with whoever or whatever produces the input data.

And of course it's never wrong to point out that append - especially since your appended subsearch seems quite heavy, with multiple spath commands - might get silently finalized and leave you with incomplete data. You should be able to use the datamodel search with a multisearch command instead.
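To make that reordering concrete, here is a rough sketch of the shape such a search could take; the index, sourcetype and dedup key are placeholders, and the exact spath behaviour on the nested object is worth verifying against your actual events:

index=your_index sourcetype=your_sourcetype "area"
| dedup your_dedup_key
| spath path=EquipmentEventReport.EquipmentEvent.ID.Location.PhysicalLocation output=loc
| fields - _raw
| spath input=loc
| search area=*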
It is highly improbable that the event log input mangles the events. I'd rather suspect that the data is being ingested in some different way. Since there is UTF-16-encoded text there, I'd suspect that, apart from ingesting data from the event log, you're somehow also trying to read the raw evtx file. Or you've hit some bug in the UF.
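One quick way to test that suspicion is to check where the binary-looking events actually come from; if the source field points at an .evtx file path rather than the WinEventLog channel, a file monitor is reading the raw file:

index=win_log sourcetype=*sysmon*
| stats count by host source sourcetype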
Oooompf. That's a rather ineffective way of creating mock data. I'd go with makeresults format=csv data=...

But to the point. Assuming you want the first (or last - it's just a matter of proper sorting) cost value for each ID daily:

| sort - _time ``` this way you'll get the latest value for each day because it will be the first one ```
| bin _time span=1d ``` this will "group" your data by day ```
| dedup _time ID ``` and this will only leave the first event for each combination of _time and ID ```

You can of course sort the other way if you want the first values daily, not the last ones (actually the reverse chronological order is the default one; it's just included here so the solution is stated as explicitly as possible). And you can do dedup over more fields (to get the values by code as well as date and ID, for example).
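For completeness, the makeresults format=csv approach mentioned above could look like this (values borrowed from the sample table earlier in the thread; the time handling is just one way to do it):

| makeresults format=csv data="ID,billing_date,code,cost,time
10001,2025-06-01,product1,102.50,2025-06-01 08:10:00
10001,2025-06-01,product2,130.75,2025-06-02 10:15:00
10001,2025-06-01,product3,150.00,2025-06-02 13:30:00"
| eval _time = strptime(time, "%Y-%m-%d %H:%M:%S")
| fields - time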
@livehybrid

Question: In your search you would struggle to achieve a timechart because you don't have _time at this point?
Response: I see - how can I achieve this?

Question: If possible please give us further info so we can help with this, but it would be good if you could confirm the field which links them? Is it trace?
Answer: Yes, it is trace.
@ITWhisperer I'm getting no results with this.
Hi all, I'm using the Splunk Universal Forwarder on Windows to collect event logs. My inputs.conf includes the following configurations:

[WinEventLog://Security]
disabled = 0
index = win_log

[WinEventLog://System]
disabled = 0
index = win_log

[WinEventLog://Application]
disabled = 0
index = win_log

[WinEventLog://Microsoft-Windows-Sysmon/Operational]
disabled = 0
renderXml = true
index = win_log

The first three (Security, System, and Application) work perfectly and show readable, structured logs. However, when I run:

index=win_log sourcetype=*sysmon*

I get logs in unreadable binary or hex format like:

\x00\x00**\x00\x00 \x00\x00@ \x00\x00\x00\x00\x00\x00\xCE....

How can I fix this and get properly parsed Sysmon logs (with fields like CommandLine, ParentProcess, etc.)?
Hi @Mirza_Jaffar1

Something has failed in the startup process. Please could you check your splunkd.log in $SPLUNK_HOME/var/log/splunk/splunkd.log and let us know what ERROR logs appear towards the end of the file?

Did this answer help you? If so, please consider:
- Adding karma to show it was useful
- Marking it as the solution if it resolved your issue
- Commenting if you need any clarification
Your feedback encourages the volunteers in this community to continue contributing
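If the web UI is not up, checking the log directly on the host is usually quickest; a minimal sketch, assuming the default log path mentioned above:

tail -n 200 $SPLUNK_HOME/var/log/splunk/splunkd.log | grep -E "ERROR|FATAL"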
Why am I seeing this issue? I was trying to upgrade Splunk Enterprise.

Checking prerequisites...
        Checking http port [8000]: open
        Checking mgmt port [8089]: open
        Checking appserver port [127.0.0.1:8065]: open
        Checking kvstore port [8191]: open
        Checking configuration... Done.
        Checking critical directories...        Done
        Checking indexes...
                Validated: _audit _configtracker _dsappevent _dsclient _dsphonehome _internal _introspection _metrics _metrics_rollup _telemetry _thefishbucket history main summary
        Done
        Bypassing local license checks since this instance is configured with a remote license master.
        Checking filesystem compatibility...  Done
        Checking conf files for problems...
                Invalid key in stanza [email] in /opt/splunk/etc/apps/search/local/alert_actions.conf, line 2: show_password (value: True).
                Invalid key in stanza [cloud] in /opt/splunk/etc/apps/splunk_assist/default/assist.conf, line 14: http_client_timout_seconds (value: 30).
                Invalid key in stanza [setup] in /opt/splunk/etc/apps/splunk_secure_gateway/default/securegateway.conf, line 16: cluster_monitor_interval (value: 300).
                Invalid key in stanza [setup] in /opt/splunk/etc/apps/splunk_secure_gateway/default/securegateway.conf, line 20: cluster_mode_enabled (value: false).
                Your indexes and inputs configurations are not internally consistent. For more information, run 'splunk btool check --debug'
        Done
        Checking default conf files for edits...
        Validating installed files against hashes from '/opt/splunk/splunk-9.3.4-30e72d3fb5f7-linux-2.6-x86_64-manifest'
        All installed files intact.
        Done
All preliminary checks passed.

Starting splunk server daemon (splunkd)...
PYTHONHTTPSVERIFY is set to 0 in splunk-launch.conf disabling certificate validation for the httplib and urllib libraries shipped with the embedded Python interpreter; must be set to "1" for increased security
Done

Waiting for web server at https://127.0.0.1:8000 to be available.............splunkd 261927 was not running.
Stopping splunk helpers...
Done.
Stopped helpers.
Removing stale pid file... done.

WARNING: web interface does not seem to be available!
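As the preflight output itself suggests, the next diagnostic step would be along these lines (install path taken from the output above):

/opt/splunk/bin/splunk btool check --debug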
Hi @Bedrohungsjäger

Please can I check what port configuration you have in SC4S? Have you set your port with SC4S_LISTEN_ZSCALER_LSS_TCP_PORT? (For more info on setup please see https://splunk.github.io/splunk-connect-for-syslog/1.90.1/sources/Zscaler/ - but you may have already seen this!)

Did this answer help you? If so, please consider:
- Adding karma to show it was useful
- Marking it as the solution if it resolved your issue
- Commenting if you need any clarification
Your feedback encourages the volunteers in this community to continue contributing
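For reference, in a typical SC4S deployment that variable goes into the env_file, roughly like this (the path and port number here are only examples, not taken from the thread):

# /opt/sc4s/env_file
SC4S_LISTEN_ZSCALER_LSS_TCP_PORT=5514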
SC4S and not S4cs, apologies for the typo.
Hey folks,

I'm ingesting ZPA logs into Splunk using the Zscaler LSS service. I believe the configuration is correct based on the documentation, however the sourcetype is coming up as sc4s fallback and the logs are unreadable. It's confirmed that the logs are streaming to the HF. Can anyone who has done a similar configuration advise?