All Posts



@PickleRick @dataisbeautiful Finally found the reason: I was unable to use the extracted fields through the Splunk REST API, which is why we couldn't get the results. I will use the _raw data to find the 500 HTTP code and get the results. Thanks for your reply.
Hi meetmash, sorry for the delay. What I want to achieve is to have Splunk ES perform an adaptive response. That AR is supposed to be attached to my detection rules and notify us on an alternative channel (like Slack, Webex, Teams, etc.). However, we only want to be notified when the urgency (after the risk correlation) of the notable is higher than low. I've been digging into this for some time and figured out that the event_id, which is required to get the urgency, can be retrieved through the macro `get_event_id_meval`. There is also a macro named `get_urgency`, but that one does not take the risk calculation into account. Ultimately, I found other macros that seem to influence the final urgency and ended up with the following script:

[...]
orig_sid = helper.settings.get('sid')
events = helper.get_events()
for event in events:
    orig_rid = event.get('rid')
    kwargs_blockingsearch = {"output_mode": "json", "earliest_time": "-10m", "latest_time": "now"}
    search_query = f"""search index=notable orig_sid={orig_sid} orig_rid={orig_rid}
        | eval `get_event_id_meval` rule_id=event_id
        | `get_correlations` | `get_urgency` | `risk_correlation`"""
    try:
        job = service.jobs.create(search_query, **kwargs_blockingsearch)
        while not job.is_done():
            sleep(.2)
        result_reader = results.JSONResultsReader(job.results(output_mode="json"))
        urgency_levels = {"critical", "high", "medium"}
        for result in result_reader:
            if isinstance(result, dict):
                if result.get("urgency") in urgency_levels:
                    pass  # here comes the code to notify us on an alternative channel
                else:
                    pass  # event not higher than low, ignore
    except Exception as e:
        pass  # some exception logging

I tried hard-coding some known sid and rid values and the script worked fine. However, attaching this as an AR to a detection rule doesn't trigger the notification. Any clue what I am missing?
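Since hard-coded sid/rid values work but the attached AR does not fire, one thing worth ruling out is whether `helper.settings.get('sid')` and `event.get('rid')` are actually populated at runtime: if either is None, the search would look for the literal text `orig_sid=None` and match nothing. A minimal, dependency-free sketch of the query construction (macro names taken from the script above; the function name is hypothetical) makes that easy to check outside Splunk:

```python
def build_notable_search(orig_sid, orig_rid):
    """Assemble the notable-index search used by the adaptive response,
    failing loudly if sid/rid are missing instead of silently matching nothing."""
    if orig_sid is None or orig_rid is None:
        raise ValueError(f"missing search context: sid={orig_sid!r}, rid={orig_rid!r}")
    return (
        f"search index=notable orig_sid={orig_sid} orig_rid={orig_rid} "
        "| eval `get_event_id_meval` rule_id=event_id "
        "| `get_correlations` | `get_urgency` | `risk_correlation`"
    )
```

Logging the built string before submitting the job (e.g. with helper.log_info, assuming an Add-on Builder style helper) would show immediately whether the AR execution context delivers different values than the hard-coded test did.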
Hi @livehybrid, thanks for your response! The panels on the right side weren't visible.

<row>
  <panel>
    <title>Panel 1</title>
    <table>
      <search>
        <query>| makeresults | eval name="p1"</query>
        <earliest>$earliest$</earliest>
        <latest>$latest$</latest>
      </search>
      <option name="drilldown">cell</option>
      <option name="refresh.display">progressbar</option>
    </table>
  </panel>
  <column>
    <row>
      <panel>
        <title>Panel 2</title>
        <table>
          <search>
            <query>| makeresults | eval name="p2"</query>
            <earliest>$earliest$</earliest>
            <latest>$latest$</latest>
          </search>
          <option name="drilldown">cell</option>
          <option name="refresh.display">progressbar</option>
        </table>
      </panel>
    </row>
    <row>
      <panel>
        <title>Panel 3</title>
        <table>
          <search>
            <query>| makeresults | eval name="p3"</query>
            <earliest>$earliest$</earliest>
            <latest>$latest$</latest>
          </search>
          <option name="drilldown">cell</option>
          <option name="refresh.display">progressbar</option>
        </table>
      </panel>
    </row>
  </column>
</row>

Please try this. Thanks!
Hi @smanojkumar Sorry, I got my wires crossed. It doesn't look like this is possible with XML dashboards; however, you can probably achieve this design much more easily with a Dashboard Studio dashboard. Is there anything preventing you from switching to Dashboard Studio for this? Thanks, Will
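For what it's worth, a rough sketch of that layout in Dashboard Studio using absolute layout (the item IDs, canvas size, and pixel positions below are illustrative, not taken from any existing dashboard):

```json
{
  "layout": {
    "type": "absolute",
    "options": { "width": 1440, "height": 600 },
    "structure": [
      { "item": "viz_panel1", "type": "block",
        "position": { "x": 0, "y": 0, "w": 700, "h": 600 } },
      { "item": "viz_panel2", "type": "block",
        "position": { "x": 700, "y": 0, "w": 740, "h": 300 } },
      { "item": "viz_panel3", "type": "block",
        "position": { "x": 700, "y": 300, "w": 740, "h": 300 } }
    ]
  }
}
```

Here viz_panel1 spans the full height on the left, while viz_panel2 and viz_panel3 stack on the right.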
Hello there, I have 3 panels. I need to display Panel 1 on the left side, and in the same row display Panel 2 and Panel 3 stacked on the right side. Is there a possible way to do this in a Classic dashboard in Splunk?

| Left  | Top-Right |
| Panel |-----------|
|       | Bot-Right |

Looking forward to the response. Thanks!
@ITWhisperer thanks for the code. It is working, but I have a doubt about your comment: do I replace the $app_name$ token with $app_name_choice$ in all my panels? Even though I didn't change them, my panels are currently refreshing according to the multiselect options given. Please confirm whether I need to replace it.
We investigated this add-on, but although it mentions TRAP, there is no information provided on how to configure it. The TRAP Cloud integration method, as far as I know, is via API.
@TheJagoff Please take a look. Is it related to the same issue?
Fixed issues - Splunk Documentation
Slow indexer/receiver detection capability - Splunk Community
Splunk crash during tcpout (outputs.conf) reload - Splunk Community
Hello. I noticed on a UF the message "Splunk destroying TcpOutputClient during shutdown/reload" at level INFO; it happens 4 or 5 times a minute for each of the 3 indexers. The UF has been running for quite some time, is not in a shutdown/reload situation, and I am receiving events from it, both _internal and OS data from TA_Splunk_nix. Is destroying a connection a normal message, and what would cause it? I can't seem to find anything online about this message.
@PickleRick Yes. I'm using the same user with both Web UI and Rest API access. 
@saif_almaskari Look for any error messages in the Splunk internal logs that might give you a clue about what's going wrong.
That is interesting. Are you using the same user to search from WebUI as you're using for API access? If not, that could mean some differences in permissions to knowledge objects - in your case - field extractions.
@PickleRick Yes. I have Web UI access also. When I run the query in the Splunk Web UI, I get results; when I execute the same query via the Splunk REST API from a Python script, I get no results. I don't know why.
I was using the Microsoft 365 App for Splunk and all of a sudden it stopped working and stopped receiving any events or logs. I have tried everything and backtracked through all the installation steps; everything seems to be in order, but I still do not receive any new information.
@dataisbeautiful This is not needed. The string is defined with triple quotes as a long string, and therefore the double quotes inside it do not need to be escaped. @BalajiRaju If your base search returns values and your filtering part causes it to return no events at all, that would mean you're filtering it wrong. There can be several reasons; the most obvious would be that the httpcode field isn't properly extracted from the events (or your data simply doesn't have any 500 results). Do you have any Web UI access, or is REST the only way you're accessing your Splunk installation?
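Two REST-specific pitfalls worth ruling out here (both are common causes, not confirmed diagnoses for this case): the REST API requires the query string to begin with a generating command such as `search`, unlike the Web UI search bar which adds it implicitly; and search-time field extractions are resolved in the session's app/user context, so a connection without an explicit app may not see extractions that are shared only within one app. A small sketch:

```python
def normalize_spl(query):
    """Make a query string valid for the Splunk REST API: unlike the
    Web UI search bar, REST requires an explicit leading 'search'
    command (or another generating command starting with '|')."""
    q = query.strip()
    if q.startswith("|") or q.lower().startswith("search "):
        return q
    return "search " + q

# Hypothetical splunklib usage (host, credentials, and app are placeholders):
# import splunklib.client as client
# service = client.connect(host="localhost", port=8089,
#                          username="...", password="...",
#                          app="search")  # explicit app context for extractions
# job = service.jobs.create(normalize_spl('index=web httpcode=500'),
#                           exec_mode="blocking")
```

If the Web UI search works only inside a particular app, connecting with that app name is the first thing to try.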
Proofpoint Essentials is, as far as I remember, a simplified Proofpoint on Demand service. Proofpoint Enterprise can be deployed either as the Proofpoint-managed Proofpoint on Demand service or as an on-premise Proofpoint Protection Server installation. As I understand it, you're using Essentials, so you're not interested in an on-premise installation. So your only way to get the detailed email-flow info would be to upgrade to Enterprise and license the Remote Syslog Forwarding feature. Then you can set up your own TLS-secured "syslog" receiver and push the events from your PoD instance. Essentials is a simplified service for small businesses and therefore doesn't have all the bells and whistles that the "full" Enterprise setup has. But it is way cheaper, as I remember.
@BalajiRaju When wrapping your query in quotes, do you escape the ones contained inside? For example: query="index=\"name\""
Thank you again for the support @gcusello. I currently don't have visibility of the _audit index in Splunk. Do you maybe know if it is possible to filter the data based on the user type as well? For example: user=admin? What other users with administrative privileges would exist in Splunk? Are there any standard fields in the _audit index that you think would be enough to archive while preserving the important details of the audit event? I would really appreciate any help!
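As a starting point, filtering audit events by user could look like the untested sketch below (action, user, info, and search are standard _audit fields, but verify against what your events actually contain):

```
index=_audit action=search user=admin
| table _time user action info search
```

To find other users that hold administrative privileges, listing users by role is one option, e.g. | rest /services/authentication/users splunk_server=local | search roles="admin" | table title roles (again an untested sketch; the exact role names depend on your configuration).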
Hi @rahulkumar , check if the fields you used in json_extract are correct (they should be): you can do this in Splunk Search. Ciao. Giuseppe
Hi @tscroggins - thanks for the pointer - I removed datasources { ... } from this defaults section and kept only tokens { ... } - and it worked.