All Posts

Good day @livehybrid  Yes, it helped. Some research with the browser dev tools shows that all the endpoints involved (logging in to Splunkbase, downloading, logging in to Splunk) are inside the main domain *.splunk.com, so allowing the splunk.com domain should be fine.  Kind regards.
The sourcetype is "cisco:sfw:estreamer" and I am using the default app settings.
Hi @yssplunker  Please could you confirm the sourcetype of your data? Looking in the app, most of the sourcetypes have TRUNCATE=0, which means they shouldn't be truncated, although not all of them! Please let me know which sourcetype you are having trouble with and I'll check that specifically.  Did this answer help you? If so, please consider:
Adding karma to show it was useful
Marking it as the solution if it resolved your issue
Commenting if you need any clarification
Your feedback encourages the volunteers in this community to continue contributing
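For reference, a minimal sketch of what a per-sourcetype override in a local props.conf could look like - the sourcetype here is the one given in the question, and the path is an assumption; place it wherever your app's local configs live:

# $SPLUNK_HOME/etc/apps/<your_app>/local/props.conf
[cisco:sfw:estreamer]
# TRUNCATE = 0 disables truncation entirely (the default limit is 10000 bytes per event)
TRUNCATE = 0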
Hi @sudha_krish  httpout sends Splunk2Splunk (S2S) data, but over HTTP (HEC) rather than the typical S2S port 9997 - is this what you are trying to achieve?  It is intended to be used when you are not able to send data to a remote Splunk instance using typical S2S.  As @gcusello has said, if you want to send to a non-Splunk system you should look into using syslog output, which will send the raw data rather than Splunk-parsed S2S data.  Did this answer help you? If so, please consider:
Adding karma to show it was useful
Marking it as the solution if it resolved your issue
Commenting if you need any clarification
Your feedback encourages the volunteers in this community to continue contributing
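If syslog output is the route you take, a minimal sketch of the outputs.conf stanzas could look like the following - the group name, host, and port are placeholders for your environment:

# outputs.conf on the heavy forwarder
[syslog]
defaultGroup = third_party_syslog

[syslog:third_party_syslog]
# placeholder host:port of the receiving syslog server
server = thirdparty_server:514
type = tcp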
Hi @dipali  I'm unable to download the app to check, but it sounds like there could be knowledge objects within the app which are not readable by the User role due to their RBAC/metadata configuration. Please check within the metadata/default.meta (and local.meta if you have made changes) to see what the different permissions are - feel free to share the contents here so we can walk through it.  Did this answer help you? If so, please consider:
Adding karma to show it was useful
Marking it as the solution if it resolved your issue
Commenting if you need any clarification
Your feedback encourages the volunteers in this community to continue contributing
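As an illustration, default.meta entries follow this pattern - the per-view stanza below is hypothetical; the app's actual file will name its own dashboards and lookups:

# metadata/default.meta
# App-wide default: readable by everyone, writable by admin
[]
access = read : [ * ], write : [ admin ]
export = system

# A per-object override like this would hide a view from the User role
[views/seclytics_dashboard]
access = read : [ admin, power ], write : [ admin ]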
If you want value to contain only those two values, you could modify @bowesmana 's solution like so:

| makeresults
| fields - _time
| eval value=split("ABC","")
| where mvcount(value)=2
| search value=A AND value=C
Hi @livehybrid, Thanks for your response. Yes, it is JSON-structured data, but there is no data at a path like data -> vulnerability -> severity. How can I send you the root cause analysis data? Sample data:

{"timestamp":"2025-04-29T12:44:53.812+0600","rule":{"level":5,"description":"Systemd: Service exited due to a failure.","id":"40704","firedtimes":4,"mail":false,"groups":["local","systemd"],"gpg13":["4.3"],"gdpr":["IV_35.7.d"]},"agent":{"id":"001","name":"debian-pc","ip":"192.168.11.XX"},"manager":{"name":"ubuntu"},"id":"1745909093.11585380","full_log":"Apr 29 06:44:53 proxmox systemd[1]: logstash.service: Main process exited, code=exited, status=1/FAILURE","predecoder":{"program_name":"systemd","timestamp":"Apr 29 06:44:53","hostname":"proxmox"},"decoder":{"name":"systemd"},"location":"journald"}
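One way to confirm which JSON paths actually exist in events like this is to let spath expand everything and list the resulting field names - the index and sourcetype below are placeholders for wherever these alerts land:

index=your_index sourcetype=your_sourcetype
| head 100
| spath
| fieldsummary
| fields field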
Yes, you should edit your Entity Search by implementing a new info field like "location", which is populated, e.g., by rex.
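A minimal sketch of such an extraction, assuming the location can be parsed from a naming convention in an existing field - the source field and the pattern here are purely illustrative:

| rex field=host "^(?<location>[a-z]{3})-"

This would populate location with a three-letter site prefix from hostnames like fra-web01; adapt the field and regex to however location is encoded in your entity data.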
Hi @sudha_krish , I'm not sure it's possible to forward logs to a third party using HTTP; the usual way is syslog, as described at https://docs.splunk.com/Documentation/SplunkCloud/9.3.2411/Forwarding/Forwarddatatothird-partysystemsd Anyway, HTTP requires a token: did you create a token on the receiver? Did you enable it? Did you pass it to your output? Ciao. Giuseppe
I want to forward the logs to a third-party server from a heavy forwarder over HTTP. Here is my outputs.conf:

[httpout]
defaultGroup = otel_hec_group

[httpout:otel_hec_group]
#server = thirdparty_server:8443
uri = http://thirdparty_server:8443
useSSL = false
sourcetype = hf_to_otel
disabled = false
sslVerifyServerCert = false
headers = {"Host": "hf_server", "Content-Type": "application/json"}
timeout = 30

But I don't receive logs on the third-party server, and I don't find any errors in the splunkd logs either. @SplunkSE 
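One thing that stands out in the config above is that no token is set. If the receiver is HEC-compatible and expects one, a sketch of the stanza with a token added could look like this - the token value is a placeholder to be replaced with one issued by the receiver:

[httpout:otel_hec_group]
uri = http://thirdparty_server:8443
# placeholder token issued by the receiving endpoint
httpEventCollectorToken = 00000000-0000-0000-0000-000000000000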
I think you have your answer in the other posts, but this is a good example of asking the right question: including the "by day" requirement is also an important point.
Users with an Admin or Power role are able to view the Seclytics dashboard provided by the "Seclytics for Splunk App". However, when users with the "User" role attempt to access the same dashboard, the content does not display. Additionally, we discovered that the lookup file "event_by_days.csv" is missing from the expected directory: /opt/splunk/etc/apps/seclytics-splunk-app/lookups/. We would like to understand the following:
Why is the dashboard visible to Admin/Power roles but not to the User role?
Are there specific role-based permissions required to access this dashboard?
Or is there a configuration change needed on our end to ensure all roles can access the content correctly?
Seclytics for Splunk App 
Hello aamer, could you please share your experience? Our logs are being parsed incorrectly at the moment; could you lend me a hand?
You didn't answer @bowesmana 's question about whether your sample is from an index or a lookup table.  I will assume that they come from events.  In this case, it is unnecessary to extract _time inline.  You can use latest as @bowesmana and @ITWhisperer suggested, or you can simply use dedup to get the latest events before further processing:

| eval day = strftime(_time, "%F")
| dedup day Name

Given this dataset:

Name  Status  _raw                        _time
ABC   F       ABC,F, 04/25/2025 15:50:00  2025-04-25 15:50:00
ABC   R       ABC,R, 04/25/2025 15:25:00  2025-04-25 15:25:00
ABC   F       ABC,F, 04/24/2025 15:30:03  2025-04-24 15:30:03
ABC   R       ABC,R, 04/24/2025 15:15:01  2025-04-24 15:15:01

the above will give you:

Name  Status  _raw                        _time                day
ABC   F       ABC,F, 04/25/2025 15:50:00  2025-04-25 15:50:00  2025-04-25
ABC   F       ABC,F, 04/24/2025 15:30:03  2025-04-24 15:30:03  2025-04-24

Here is a full emulation of your mock data:

| makeresults
| eval _raw="Name,Status,Datestamp
ABC,F, 04/24/2025 15:30:03
ABC,R, 04/24/2025 15:15:01
ABC,F, 04/25/2025 15:50:00
ABC,R, 04/25/2025 15:25:00"
| multikv forceheader=1
| eval _time = strptime(Datestamp, "%m/%d/%Y %T")
| fields - Datestamp linecount
| sort - _time
``` data emulation above ```
| makeresults format=csv data="Name,Status,Timestamp
ABC,F, 04/24/2025 15:30:03
ABC, R, 04/24/2025 15:15:01
ABC, F, 04/25/2025 15:50:00
ABC, R, 04/25/2025 15:25:00"
| eval _time = strptime(Timestamp, "%m/%d/%Y %T")
| bin _time as _day span=1d
| stats latest(*) as * by _day Name
Hi @bsreeram  If you want it split by Name and day, so that you get the latest per Name AND day, then you can use a timechart:

| timechart span=1d latest(*) as *

Did this answer help you? If so, please consider:
Adding karma to show it was useful
Marking it as the solution if it resolved your issue
Commenting if you need any clarification
Your feedback encourages the volunteers in this community to continue contributing.
It worked for certain cases, but please see the following. For these data records:

ABC,F, 04/24/2025 15:30:03
ABC, R, 04/24/2025 15:15:01
ABC, F, 04/25/2025 15:50:00
ABC, R, 04/25/2025 15:25:00

the result should be as follows, i.e. the latest status by day has to be captured:

ABC,F, 04/24/2025 15:30:03
ABC, F, 04/25/2025 15:50:00
@msatish - Yes, you can always define your own sourcetype and your own custom index that you want any data to fall into.  But, as @livehybrid is asking, you need to figure out how you are collecting the data and which format the logs are in, so you can work out in which config file, and where, you can apply the new sourcetype and index. You also need to put props.conf configuration (parsing, timestamp extraction, field extraction, etc.) in place for your custom sourcetype.  And make sure the index is created on your indexers before you start pushing data into your custom index.  I hope this helps!
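A minimal sketch of what defining a custom sourcetype and index could look like for a simple file monitor - every name here (path, sourcetype, index, timestamp format) is a hypothetical example to adapt:

# inputs.conf on the collecting instance
[monitor:///var/log/myapp/app.log]
sourcetype = myapp:log
index = myapp_index

# props.conf on the parsing tier (HF or indexers)
[myapp:log]
SHOULD_LINEMERGE = false
LINE_BREAKER = ([\r\n]+)
TIME_PREFIX = ^
TIME_FORMAT = %Y-%m-%d %H:%M:%S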
Hi livehybrid, Thank you for pointing that out to me. Regarding your suggested solution: it was very helpful. I checked the mongod.log using the following command:

tail -n 200 $SPLUNK_HOME/var/log/splunk/mongod.log

The output clearly showed the issue:

2025-03-27T10:16:32.087Z W NETWORK [main] Server certificate has no compatible Subject Alternative Name. This may prevent TLS clients from connecting
2025-03-27T10:16:32.087Z F NETWORK [main] The provided SSL certificate is expired or not yet valid.
2025-03-27T10:16:32.087Z F - [main] Fatal Assertion 28652 at src/mongo/util/net/ssl_manager_openssl.cpp 1182
2025-03-27T10:16:32.087Z F - [main] ***aborting after fassert() failure

It turned out that the server SSL certificate had expired. Here are the steps I took to resolve the issue:

1- Backed up the existing certificate:
cp $SPLUNK_HOME/etc/auth/server.pem $SPLUNK_HOME/etc/auth/server.pem.bak

2- Generated a new self-signed certificate:
splunk createssl server-cert -d $SPLUNK_HOME/etc/auth -n server
(This creates a new server.pem valid for 2 years.)

3- Restarted Splunk:
./splunk restart

4- Verified the KV Store status:
splunk show kvstore-status

##### Note for Search Head Cluster #####
Since we're running a SH cluster, I made sure to:
Copy the new server.pem to all search head members.
Restart Splunk on each node.

These steps fully resolved the issue, and the KV Store is now functioning as expected.
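As a side note, a quick way to check whether a server.pem has expired before digging through mongod.log is to read the certificate's validity window with openssl (a generic OpenSSL check, not Splunk-specific):

openssl x509 -in $SPLUNK_HOME/etc/auth/server.pem -noout -dates

This prints the notBefore and notAfter dates of the first certificate in the file.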