All Posts

This occurred while upgrading Splunk Enterprise from v8.2.8 -> 9.1.0 -> 9.2.0 -> 9.3.0 -> 9.3.4.
Let me jump in and offer some side notes (you're of course free to ignore me completely, as this might not be related to the immediate problem at hand). I'll leave aside the append command itself for now, but the appended search could be made much more efficient - you're doing a lot of work before discarding a (probably) significant portion of your data.

Firstly, you could run the two spath commands after the search for area=* and the dedup. That way you do much less JSON parsing, which is quite heavy.

Secondly, you could parse path=EquipmentEventReport.EquipmentEvent.ID.Location.PhysicalLocation as a whole and get three fields for the price of one run of spath.

Thirdly, if you want all events having anything in the ...area path, you could first limit your search to results containing "area" as a search term. It might not be 100% accurate, since the word can occur elsewhere in your events, but it can be a pretty good way to narrow the search. (Of course it won't work if a field named "area" appears in another "branch" of your JSONs in 100% of your events, but it's worth checking out.)

Fourthly, as I understand it, you have quite sizeable JSONs. It's best to drop them as early as possible, so you should move your fields - _raw as far up the search as possible - probably right after the dedup.

And finally, your data is very, very tricky to work with. You have multiple multivalued fields. I understand the assumption is that, for each of those fields, the first values all belong to the same "event" or "state" or whatever, the second values form another tuple, and so on. The trouble is that there is no way in Splunk to make sure of that unless you are absolutely certain your input data is always fully populated, correct, and additionally properly ingested, parsed and so on. Otherwise a single missing value here and there squashes your values together, so relying on the order of values across multiple multivalued fields is extremely tricky. Unfortunately, sometimes the input data is simply very badly formatted and you don't have much choice, but it might be worth raising the issue with whoever or whatever produces the input data.

And of course it's never wrong to point out that append - especially since your appended subsearch seems quite heavy, with multiple spath commands - might get silently finalized and leave you with incomplete data. You should be able to use the datamodel search with a multisearch command instead.
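Putting those notes together, a minimal sketch of the reordered pipeline - the index, sourcetype, and dedup key here are placeholders (they are not from the original search), and it assumes the area field lives under PhysicalLocation:

index=ndx sourcetype=st "area" ``` narrow with the raw search term first ```
| dedup id ``` hypothetical dedup key ```
| spath path=EquipmentEventReport.EquipmentEvent.ID.Location.PhysicalLocation output=physloc
| spath input=physloc ``` one focused parse instead of several full-event spath runs ```
| fields - _raw physloc ``` drop the heavy JSON as early as possible ```
| search area=*

The point is just the ordering: cheap term filtering and dedup first, JSON parsing on the survivors only, and _raw dropped as soon as it's no longer needed.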
It is highly improbable that the eventlog input mangles the events. I'd rather suspect that the data is being ingested in some different way. Since there is UTF-16-encoded text in there, I'd suspect that, apart from ingesting data from the event log, you're somehow also reading the raw .evtx file. Or you've hit some bug in the UF.
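If it helps to confirm that theory, a quick look at where the mangled events claim to come from should distinguish the WinEventLog input from a file monitor reading the raw .evtx (the index and sourcetype filter are taken from the question below):

index=win_log sourcetype=*sysmon*
| stats count by source, sourcetype, host

Events from the event log input normally show a WinEventLog source, while a source path ending in .evtx would point at an accidental file monitor.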
Oooompf. That's a rather inefficient way of creating mock data - I'd go with makeresults format=csv data=... But to the point. Assuming you want the first (or last - it's just a matter of proper sorting) cost value for each ID daily:

| sort - _time ``` this way you'll get the latest value for each day, because it will be the first one ```
| bin _time span=1d ``` this will "group" your data by day ```
| dedup _time ID ``` and this will leave only the first event for each combination of _time and ID ```

You can of course sort the other way if you want the first values daily rather than the last ones (reverse chronological order is actually the default; the sort is included here just so the solution is stated as explicitly as possible). And you can dedup over more fields (to get the values by code as well as by date and ID, for example).
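For completeness, a sketch of the makeresults format=csv approach mentioned above, with the pipeline appended. The values are illustrative, echoing the sample data in this thread, and format=csv needs a reasonably recent Splunk version:

| makeresults format=csv data="time,ID,cost,code
2025-06-01 08:00:00,10001,100.50,product1
2025-06-01 10:15:00,10001,120.75,product2
2025-06-02 10:15:00,10001,130.75,product2"
| eval _time=strptime(time, "%Y-%m-%d %H:%M:%S") ``` promote the csv column to event time ```
| fields - time
| sort - _time
| bin _time span=1d
| dedup _time ID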
@livehybrid
Question: In your search, wouldn't you struggle to achieve a timechart because you don't have _time at this point?
Response: I see - how can I achieve this?
Question: If possible, please give us further info so we can help with this. It would also be good if you could confirm the field which links them - is it trace?
Answer: Yes, it is trace.
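One way to get _time back for a timechart when correlating by trace is to carry it through the stats. A minimal sketch - the index, sourcetypes, and status field are placeholders; only trace comes from the thread:

index=app (sourcetype=service_a OR sourcetype=service_b)
| stats min(_time) as _time values(status) as status by trace
| timechart span=15m count

Because stats emits one row per trace with a real _time value, timechart can bucket the correlated results afterwards.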
@ITWhisperer I'm getting no results with this.
Hi all, I'm using the Splunk Universal Forwarder on Windows to collect event logs. My inputs.conf includes the following configurations:

[WinEventLog://Security]
disabled = 0
index = win_log

[WinEventLog://System]
disabled = 0
index = win_log

[WinEventLog://Application]
disabled = 0
index = win_log

[WinEventLog://Microsoft-Windows-Sysmon/Operational]
disabled = 0
renderXml = true
index = win_log

The first three (Security, System, and Application) work perfectly and show readable, structured logs. However, when I run:

index=win_log sourcetype=*sysmon*

I get logs in unreadable binary or hex format like:

\x00\x00**\x00\x00 \x00\x00@ \x00\x00\x00\x00\x00\x00\xCE....

How can I fix this and get properly parsed Sysmon logs (with fields like CommandLine, ParentProcess, etc.)?
Hi @Mirza_Jaffar1
Something has failed during the startup process. Could you please check splunkd.log in $SPLUNK_HOME/var/log/splunk/splunkd.log and let us know what ERROR entries appear towards the end of the file?
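Once splunkd is running again, you can also pull the same errors from the _internal index; for an instance that won't start, inspecting the file on disk is the only option. A sketch, assuming default internal logging:

index=_internal source=*splunkd.log* log_level=ERROR earliest=-24h
| stats count by component
| sort - count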
Why am I seeing these issues? I was trying to upgrade Splunk Enterprise.

Checking prerequisites...
        Checking http port [8000]: open
        Checking mgmt port [8089]: open
        Checking appserver port [127.0.0.1:8065]: open
        Checking kvstore port [8191]: open
        Checking configuration... Done.
        Checking critical directories... Done
        Checking indexes...
                Validated: _audit _configtracker _dsappevent _dsclient _dsphonehome _internal _introspection _metrics _metrics_rollup _telemetry _thefishbucket history main summary
        Done
Bypassing local license checks since this instance is configured with a remote license master.
        Checking filesystem compatibility... Done
        Checking conf files for problems...
                Invalid key in stanza [email] in /opt/splunk/etc/apps/search/local/alert_actions.conf, line 2: show_password (value: True).
                Invalid key in stanza [cloud] in /opt/splunk/etc/apps/splunk_assist/default/assist.conf, line 14: http_client_timout_seconds (value: 30).
                Invalid key in stanza [setup] in /opt/splunk/etc/apps/splunk_secure_gateway/default/securegateway.conf, line 16: cluster_monitor_interval (value: 300).
                Invalid key in stanza [setup] in /opt/splunk/etc/apps/splunk_secure_gateway/default/securegateway.conf, line 20: cluster_mode_enabled (value: false).
                Your indexes and inputs configurations are not internally consistent. For more information, run 'splunk btool check --debug'
        Done
        Checking default conf files for edits...
        Validating installed files against hashes from '/opt/splunk/splunk-9.3.4-30e72d3fb5f7-linux-2.6-x86_64-manifest'
        All installed files intact.
        Done
All preliminary checks passed.

Starting splunk server daemon (splunkd)...
PYTHONHTTPSVERIFY is set to 0 in splunk-launch.conf disabling certificate validation for the httplib and urllib libraries shipped with the embedded Python interpreter; must be set to "1" for increased security
Done

Waiting for web server at https://127.0.0.1:8000 to be available.............splunkd 261927 was not running.
Stopping splunk helpers...
Done.
Stopped helpers.
Removing stale pid file... done.

WARNING: web interface does not seem to be available!
Hi @Bedrohungsjäger
Please can I check what port configuration you have in SC4S? Have you set your port with SC4S_LISTEN_ZSCALER_LSS_TCP_PORT? (For more info on setup please see https://splunk.github.io/splunk-connect-for-syslog/1.90.1/sources/Zscaler/ - but you may have already seen this!)
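To see how much data is landing in the fallback bucket and from which hosts, something like this may help (it assumes the events carry the usual sc4s:fallback sourcetype and that you can search across indexes):

index=* sourcetype=sc4s:fallback
| stats count by host, source, index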
SC4S and not S4cs, apologies for the typo.
Hey folks, I'm ingesting ZPA logs into Splunk using the Zscaler LSS service. I believe the configuration is correct based on the documentation; however, the sourcetype is coming up as sc4s fallback and the logs are unreadable. It's confirmed that the logs are streaming to the HF. Can anyone who's done a similar configuration setup advise?
In ITSI, when using NEAP to trigger email alerts, Splunk ITSI automatically appends footer text to the email body. Even though we remove the footer text via the email alert action configuration and it saves successfully, it reappears when the configuration is reopened. The issue persists despite the general email settings in Splunk not containing any footer text. We also need "HTML & Plain Text" enabled because we use tokens and multiple links (service analyzer and viewing the episode), which otherwise won't render correctly; for that reason we cannot use "Plain Text" alone. If anyone has any ideas or suggestions, that would be much appreciated.
Hi @sarit_s6
Sorry - text in Dashboard Studio can only be styled at the visualization level, not for a specific column within your table.
Hi @danielbb
Splunk dashboards are rendered with ReactJS, so I'm not sure you'd have much success getting this to work from Java - the framework can be found at https://splunkui.splunk.com/Packages/visualizations/ if you're interested in looking into it. One thing you might be able to do is publish your dashboard; however, this relies on scheduling the search, so it's not very real-time. Another option would be dashpub, which gives a little more control - you would need to render these within a web view component inside your Java app. Otherwise, as @richgalloway mentioned, you could use the REST API, but this would only return the results, not the visuals.
The REST API lets you run searches and retrieve the results, but it cannot render visualizations. That's normally done by the web browser, so if you're not using a browser then Splunk visualizations are not possible. The customer would have to use the data fetched by the API in another tool to create charts.
We have a case of a customer who developed a dashboard within Splunk that has 10 panels (visualizations). He uses the dashboard from outside Splunk with a Java program that does web scraping, and then produces the appropriate real-time reports in code. Now, due to authentication issues, we would like to convert this to be done via the REST API. Do we have any other options, like embedding or anything else?
Ok, let me try to get some better sample data. I believe I have it here. While this is only one ID, the data has multiple IDs and spans multiple months.

| makeresults count=1
| eval ID="10001", _time=strptime("2025-06-01 08:00:00", "%Y-%m-%d %H:%M:%S"), billing_date="2025-06-01", cost=100.50, code="product1"
| append [| makeresults | eval ID="10001", _time=strptime("2025-06-01 10:15:00", "%Y-%m-%d %H:%M:%S"), billing_date="2025-06-01", cost=120.75, code="product2"]
| append [| makeresults | eval ID="10001", _time=strptime("2025-06-01 13:30:00", "%Y-%m-%d %H:%M:%S"), billing_date="2025-06-01", cost=140.00, code="product3"]
| append [| makeresults | eval ID="10001", _time=strptime("2025-06-02 10:15:00", "%Y-%m-%d %H:%M:%S"), billing_date="2025-06-01", cost=130.75, code="product2"]
| append [| makeresults | eval ID="10001", _time=strptime("2025-06-02 13:30:00", "%Y-%m-%d %H:%M:%S"), billing_date="2025-06-01", cost=150.00, code="product3"]
| append [| makeresults | eval ID="10001", _time=strptime("2025-06-01 08:10:00", "%Y-%m-%d %H:%M:%S"), billing_date="2025-06-01", cost=102.50, code="product1"]
| append [| makeresults | eval ID="10001", _time=strptime("2025-06-01 10:15:00", "%Y-%m-%d %H:%M:%S"), billing_date="2025-06-01", cost=125.75, code="product2"]
| append [| makeresults | eval ID="10001", _time=strptime("2025-06-01 13:30:00", "%Y-%m-%d %H:%M:%S"), billing_date="2025-06-01", cost=145.00, code="product3"]
| append [| makeresults | eval ID="10001", _time=strptime("2025-06-02 10:15:00", "%Y-%m-%d %H:%M:%S"), billing_date="2025-06-01", cost=135.75, code="product2"]
| append [| makeresults | eval ID="10001", _time=strptime("2025-06-02 13:30:00", "%Y-%m-%d %H:%M:%S"), billing_date="2025-06-01", cost=155.00, code="product3"]
| append [| makeresults | eval ID="10001", _time=strptime("2025-05-01 10:15:00", "%Y-%m-%d %H:%M:%S"), billing_date="2025-05-01", cost=125.75, code="product2"]
| append [| makeresults | eval ID="10001", _time=strptime("2025-05-01 13:30:00", "%Y-%m-%d %H:%M:%S"), billing_date="2025-05-01", cost=145.00, code="product3"]
| append [| makeresults | eval ID="10001", _time=strptime("2025-05-02 10:15:00", "%Y-%m-%d %H:%M:%S"), billing_date="2025-05-01", cost=135.75, code="product2"]
| append [| makeresults | eval ID="10001", _time=strptime("2025-05-02 13:30:00", "%Y-%m-%d %H:%M:%S"), billing_date="2025-05-01", cost=155.00, code="product3"]
| append [| makeresults | eval ID="10001", _time=strptime("2025-05-01 10:15:00", "%Y-%m-%d %H:%M:%S"), billing_date="2025-05-01", cost=120.75, code="product2"]
| append [| makeresults | eval ID="10001", _time=strptime("2025-05-02 13:30:00", "%Y-%m-%d %H:%M:%S"), billing_date="2025-05-01", cost=140.00, code="product3"]
| append [| makeresults | eval ID="10001", _time=strptime("2025-05-02 10:15:00", "%Y-%m-%d %H:%M:%S"), billing_date="2025-05-01", cost=130.75, code="product2"]
| append [| makeresults | eval ID="10001", _time=strptime("2025-05-02 13:30:00", "%Y-%m-%d %H:%M:%S"), billing_date="2025-05-01", cost=150.00, code="product3"]
Hello
In Splunk Dashboard Studio, is it possible to change the font size of a specific column?

Thanks
Hi @schose  Were you able to resolve the issue of the navigation menu labels updating? I'm running into the same issue.