All Posts

Yes, the fields are correct, but the values coming out are the same as what I was getting with the spath statement. Is there any difference I will get if I am using props and transforms conf?
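For context, the search-time equivalent of an explicit spath for JSON data is usually just KV_MODE in props.conf; a minimal sketch, assuming a custom sourcetype name of my_json_events:

[my_json_events]
KV_MODE = json

With that in place the fields appear automatically at search time, so the values should indeed match what spath returns; props/transforms mainly buys you convenience (and index-time extraction, if you go that route), not different values.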
Thank you @tscroggins. We have two Logstash servers, so I took one of them and made a conf file that sends data from Elasticsearch to Splunk via HEC. The only issue now is that Logstash is running out of heap memory due to the size of the transfers. Working on fixing the pipeline now. Thanks again for the suggestions!
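If it helps anyone hitting the same heap pressure on large Elasticsearch-to-HEC transfers, the usual levers are the in-flight batch settings in logstash.yml and the JVM heap in jvm.options. A sketch only, with illustrative numbers rather than recommendations:

# logstash.yml
pipeline.workers: 2
pipeline.batch.size: 125

# jvm.options
-Xms2g
-Xmx2g

Smaller batches per worker mean fewer events held in memory at once, at the cost of throughput.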
The inputs are unchecked (enabled) now, and disabled = 0 is set in local/inputs.conf as well. 443/tcp is allowed in the firewall. There is still no data. Is there anything I am missing? Thank you everyone for your help! [Screenshots of the API token POST request and the internal log omitted.]
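A quick way to rule out the token and port is a manual HEC test from another host; the hostname, port, and token below are placeholders:

curl -k https://your-splunk-host:443/services/collector/event \
  -H "Authorization: Splunk <your-hec-token>" \
  -d '{"event": "hec connectivity test", "sourcetype": "manual_test"}'

A {"text":"Success","code":0} response means HEC itself is working and the problem is elsewhere; anything else (or a timeout) points at the token, the port, or TLS.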
I don't have a resolution here. This is the documentation, and the issue is around certs, but I still can't work out where I'm going wrong: Upgrade the KV store server version - Splunk Documentation. I'm just going to wait for a new version where this is resolved.
@gcusello thanks for the quick response.
>> at first, are you sure that you are analyzing only the new data and not also the oldest?
Yes, I have changed the time picker to the last 15 or 60 minutes to make sure it's all recent data.
>> At least, are you sure that you're receiving logs from the same host?
Yes, this is a very small deployment and we have only one ESX server.
>> Anyway, use btool
I meant to try btool but ended up posting the question before trying it. I will do that now.
Hi @jkamdar, at first, are you sure that you are analyzing only the new data and not also the oldest? Anyway, use btool ( https://docs.splunk.com/Documentation/Splunk/9.4.0/Troubleshooting/Usebtooltotroubleshootconfigurations ) to debug your configurations, because there's probably another input. At least, are you sure that you're receiving logs from the same host? Ciao. Giuseppe
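For reference, the btool check for this case would look something like the following (the grep filter is only illustrative); it prints every merged inputs.conf stanza together with the file it came from:

$SPLUNK_HOME/bin/splunk btool inputs list --debug | grep -i esx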
Hi @Zorghost, at first, there's a typo: not audittrial but audittrail. Then, analyzing the results of your search, I see some interesting fields: _time, user, dest, action, info. But I don't think that you need external help for this! Ciao. Giuseppe
I have ESX hosts sending logs to rsyslog, which are then ingested into Splunk. Originally, I configured ingestion of all logs (my Linux servers and ESX) into one index called linux. Later, I created a new index called "esx", modified inputs.conf on my rsyslog server to add index = esx to the stanzas for all the ESX hosts and esxvcenter, and restarted the Splunk forwarder. However, it looks like I am getting data in both indexes, linux and esx. I have checked every possible inputs.conf on my rsyslog server but can't find anything that directs ESX logs to the "linux" index. Any help troubleshooting the issue would be appreciated.
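For reference, the stanzas on the rsyslog host would be expected to look roughly like this (the path and sourcetype are placeholders for whatever rsyslog actually writes):

[monitor:///var/log/remote/esx*/*.log]
index = esx
sourcetype = vmware:esxlog
disabled = 0

If another stanza still sets index = linux (or a broader monitor stanza without an index setting still matches the same files and falls back to a default that points there), data can keep landing in the linux index; splunk btool inputs list --debug will show exactly which stanzas apply and from which file.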
Hi @gcusello and thanks again for your reply! What I want is a query that I can use to fetch only the important fields from the _audit index, to get visibility into admin activity events. What I currently have is:

index=_audit sourcetype="audittrial" action=edit* OR action=create* OR action=delete* OR action=restart*

I want to get the least possible amount of data volume while still getting the information needed to construct the audit events.
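Putting the typo fix and the field list from the reply above together, a trimmed-down version might look like this (the field list is only a guess at what counts as "important" for this use case):

index=_audit sourcetype=audittrail (action=edit* OR action=create* OR action=delete* OR action=restart*)
| table _time user action info

The parentheses make the OR grouping explicit, and the final table keeps only the fields actually needed in the results.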
When the index becomes full, indexing will stop but DB Connect will continue to run.  Don't let the index get full.
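If "full" here means the volume hosting the index has dropped below Splunk's minFreeSpace threshold (the point at which indexing pauses), the usual prevention is capping the index with retention settings in indexes.conf; the stanza name and numbers below are examples only:

[your_db_index]
maxTotalDataSizeMB = 500000
frozenTimePeriodInSecs = 7776000

Once either cap is reached, the oldest buckets are frozen (deleted by default) instead of the disk filling up.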
Hi @Zorghost, let me understand: you need to access the _audit index, but you aren't enabled to access it, and you would like a copy of these logs that is accessible to you, is that correct? If this is your requirement, the easiest way is obviously to be granted access to the _audit index! Otherwise, you could schedule a search (run with administrative grants) that copies the _audit index into a summary index, so you can access it in Splunk. Ciao. Giuseppe
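A minimal sketch of such a scheduled search, assuming a summary index named summary_audit already exists and the schedule matches the time range (for example, run hourly over the previous hour):

index=_audit sourcetype=audittrail
| collect index=summary_audit

The collect command writes the results into the summary index, which can then be opened up to the restricted role.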
Looking through the second article that you suggested, I noticed that outputs.conf had:

autoLBFrequency = 15
forceTimebasedAutoLB = true

I removed forceTimebasedAutoLB = true and the message stopped after the UF restarted. It appears that the two entries were conflicting with each other. Thank you for the guidance!
You probably do need to use app_name_choice in your panels if you want the All option to be converted to *; otherwise your search will be for app_name="All", which is probably not what you want!
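A sketch of what that looks like in Simple XML, assuming a dropdown token named app_name_choice and a field called app_name (the populating search is illustrative):

<input type="dropdown" token="app_name_choice">
  <label>Application</label>
  <choice value="*">All</choice>
  <default>*</default>
  <fieldForLabel>app_name</fieldForLabel>
  <fieldForValue>app_name</fieldForValue>
  <search>
    <query>index=your_index | stats count by app_name</query>
  </search>
</input>

and in the panel query use app_name="$app_name_choice$", so selecting All searches app_name="*" rather than the literal string "All".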
Sounds like a good plan B to me. Will try to go with that instead. KR
Yeah, but the question is why you don't see the fields when spawning the search from the REST API. That's unexpected. If you're using the same user, which should obviously have the same permissions for knowledge objects, you should be getting the same behaviour. Just to be on the safe side: is your WebUI SH the same one you're spawning your REST API search against?
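One more thing worth checking in this situation: knowledge objects such as field extractions are app-scoped, so a job created against the bare /services endpoint can behave differently from one created in an explicit user/app namespace. A hedged example of the latter (host, credentials, and app name are placeholders):

curl -k -u admin:changeme \
  "https://your-search-head:8089/servicesNS/admin/search/search/jobs" \
  -d search="search index=your_index sourcetype=your_sourcetype status=500 | stats count" \
  -d output_mode=json

If the results fetched from the returned sid include the fields this way but not via /services/search/jobs, the namespace is the culprit rather than the data.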
Instead of doing this, why don't we NOT trigger the AR in the first place? Instead, we let the notable get created, and later have another scheduled search check the priority of the notable (based on the notable macro) and trigger the alert if it is higher than low. What say? Please hit Karma if this helps!
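A minimal sketch of that scheduled search, assuming the standard ES `notable` macro and the stock urgency values (the time range and the notification action would be configured on the saved search itself):

`notable`
| search urgency IN ("medium", "high", "critical")
| table _time event_id rule_name urgency

Anything rated low or informational simply never matches, so no alert fires for it.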
@PickleRick @dataisbeautiful Finally found the reason: we are unable to use the fields from the Splunk REST API, and that's why we couldn't get the results. I will use the _raw data to find the 500 HTTP code and get the results. Thanks for your reply.
Hi meetmash, sorry for the delay. What I want to achieve is to have Splunk ES perform an adaptive response. That AR is supposed to be attached to my detection rules and notify us on an alternative channel (Slack, Webex, Teams, etc.). However, we only want to get notified when the urgency of the notable (after the risk correlation) is higher than low. I've been digging into this for some time and figured out that the event_id, which is required to get the urgency, can be obtained through the macro `get_event_id_meval`. There is also a macro named `get_urgency`, but that one does not take the risk calculation into account. Ultimately, I found other macros that seem to influence the final urgency and ended up with the following script:

[...]
orig_sid = helper.settings.get('sid')
events = helper.get_events()
for event in events:
    orig_rid = event.get('rid')
    kwargs_blockingsearch = {"output_mode": "json", "earliest_time": "-10m", "latest_time": "now"}
    # Re-run the urgency pipeline for the notable that triggered this AR
    search_query = f"""search index=notable orig_sid={orig_sid} orig_rid={orig_rid}
        | eval `get_event_id_meval` rule_id=event_id
        | `get_correlations`
        | `get_urgency`
        | `risk_correlation`"""
    try:
        job = service.jobs.create(search_query, **kwargs_blockingsearch)
        while not job.is_done():
            sleep(.2)
        result_reader = results.JSONResultsReader(job.results(output_mode="json"))
        urgency_levels = {"critical", "high", "medium"}
        for result in result_reader:
            if isinstance(result, dict):
                if result.get("urgency") in urgency_levels:
                    """Here comes the code to notify us on the alternative channel"""
                else:
                    """Event not higher than low, ignore"""
    except Exception as e:
        """some exception logging"""

I tried hard-coding some known sid and rid values and the script worked fine. However, attaching this as an AR to a detection rule doesn't trigger the notification. Any clue what I am missing?
Hi @livehybrid, thanks for your response! The panels on the right side weren't visible with the following XML:

<row>
  <panel>
    <title>Panel 1</title>
    <table>
      <search>
        <query>| makeresults | eval name="p1"</query>
        <earliest>$earliest$</earliest>
        <latest>$latest$</latest>
      </search>
      <option name="drilldown">cell</option>
      <option name="refresh.display">progressbar</option>
    </table>
  </panel>
  <column>
    <row>
      <panel>
        <title>Panel 2</title>
        <table>
          <search>
            <query>| makeresults | eval name="p2"</query>
            <earliest>$earliest$</earliest>
            <latest>$latest$</latest>
          </search>
          <option name="drilldown">cell</option>
          <option name="refresh.display">progressbar</option>
        </table>
      </panel>
    </row>
    <row>
      <panel>
        <title>Panel 3</title>
        <table>
          <search>
            <query>| makeresults | eval name="p3"</query>
            <earliest>$earliest$</earliest>
            <latest>$latest$</latest>
          </search>
          <option name="drilldown">cell</option>
          <option name="refresh.display">progressbar</option>
        </table>
      </panel>
    </row>
  </column>
</row>

Please try this. Thanks!
Hi @smanojkumar, sorry, I got my wires crossed. It doesn't look like this is possible with XML dashboards; however, you can probably achieve this design much more easily in Dashboard Studio. Is there anything preventing you from switching to Dashboard Studio for this? Thanks, Will
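For reference, in Dashboard Studio this kind of arrangement (one tall panel on the left, two stacked panels on the right) comes down to coordinates in an absolute layout. A rough sketch of the layout section only, with placeholder item IDs and sizes:

"layout": {
    "type": "absolute",
    "options": {"width": 1440, "height": 800},
    "structure": [
        {"item": "viz_panel1", "type": "block", "position": {"x": 0,   "y": 0,   "w": 700, "h": 800}},
        {"item": "viz_panel2", "type": "block", "position": {"x": 700, "y": 0,   "w": 740, "h": 400}},
        {"item": "viz_panel3", "type": "block", "position": {"x": 700, "y": 400, "w": 740, "h": 400}}
    ]
}

The viz_* IDs would need to match the visualizations defined elsewhere in the dashboard definition.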