All Topics

I am looking for an app that I can take and tailor to a list of event IDs that we want to audit. We like the Eventid.net app, but its list of IDs is pretty limited. I would like to take that and maybe just add more IDs to the lookup table, or somehow tailor it for us. Any suggestions would be appreciated.
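If the app stores its event IDs in a CSV lookup, one hedged approach is to append your own rows and write the file back. The lookup file name and field names below are assumptions for illustration; check the app's transforms.conf for the real ones.

```
| inputlookup eventid_lookup.csv
| append
    [| makeresults
     | eval EventCode="4625", description="An account failed to log on"
     | fields EventCode description]
| outputlookup eventid_lookup.csv
```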
My goal is to design an alert that populates a table of raw results, but only when certain aggregate conditions apply. For example: if the total count of events in a time frame is greater than 100, post the table of raw data. How do I achieve this limitation (similar to SQL's HAVING) while preserving my desired table output? My query so far, which produces the table output I want but without the HAVING logic:

splunk_server=indexer* index=wsi sourcetype=fdpwsiperf (channel_type=ofx2 OR agent_service=OfxAgent) domain=tax api_version=v1 capability=* tax_year=2019 partnerId!=*test* partnerId="ADP"
| lookup Provider_Alert.csv Provider_ID AS partnerId OUTPUT Tier Form_Type
| search Tier=Tier1
| eval capability=if(like(capability,"109%"),"1099",'capability')
| eval error_category=case(like(http_status_code_host,"5%"), "5XX", like(http_status_code_host,"4%"), "4XX", http_error_host="Read timed out", 'http_error_host', 1==1, "Other")
| table _time, partnerId, intuit_tid, error_category, capability, tax_year, ofx_appid, host_base_url
| rename intuit_tid AS TRNUID

Do not direct me to the "From SQL to Splunk SPL" documentation. I've reviewed it, and it's not helpful for my use case. Thanks!
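For what it's worth, a common SPL analogue of SQL's HAVING is to compute the aggregate with eventstats, filter on it with where, and then drop the helper field; this keeps the raw events available for the table. A sketch appended to the base search above (the threshold of 100 comes from the example):

```
... base search | lookup ... | search Tier=Tier1 | eval ...
| eventstats count AS total_events
| where total_events > 100
| fields - total_events
| table _time, partnerId, intuit_tid, error_category, capability, tax_year, ofx_appid, host_base_url
| rename intuit_tid AS TRNUID
```

Unlike stats, eventstats attaches the aggregate to every event without collapsing them, which is why the raw table survives the filter.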
Loving this viz and the Number Display viz. However, I'm having a problem getting drilldowns to work. If I set the drilldown to Link to Search, clicking a value runs the basic search and selects the value clicked. I haven't been able to create a working token to send to a search on the same dashboard. There's a setting under General / Click action: Set tokens, to $sunburst_viz_{field}$, and I've assigned that to a form.field_name, but on clicking the field value in the Sunburst, the search searches on the literal variable set in the token. How is the drilldown supposed to work with Sunburst and Number Display? Thank you!
The context: we have an integration between a tool and AD using agents. Every so often, the tool reports that the agent disconnected, and then about 5-20 minutes later it'll say the agent reconnected. I already have a search that uses transaction to get me what I need in general, but it's not quite what I'm looking for. The draft:

index="connector" eventType="ad.agent.connect"
| rex field=target "\"displayName\": \"(?<agent>[^\"]+)\".+"
| transaction agent startswith="ad.agent.disconnected" endswith="ad.agent.reconnected"
| table _time, displayMessage, agent
| sort _time

What I actually want: only events that do not have an "ad.agent.reconnected" event within 30 minutes of the "ad.agent.disconnected" event. maxspan isn't doing it for me; I need something more like minspan, or invert=true, or something. The agent name isn't unique enough to go "if you never see this field again". Help?
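One pattern that may help (a sketch, not tested against your data): give transaction a 30-minute maxspan, keep the evicted (unclosed) transactions with keepevicted, and then filter on closed_txn, which is 0 for transactions that never saw the endswith event:

```
index="connector" eventType="ad.agent.connect"
| rex field=target "\"displayName\": \"(?<agent>[^\"]+)\""
| transaction agent startswith="ad.agent.disconnected" endswith="ad.agent.reconnected" maxspan=30m keepevicted=true
| where closed_txn=0
| table _time, displayMessage, agent
| sort _time
```

The effect is the "invert" you're after: only disconnects with no matching reconnect inside the 30-minute window survive the where clause.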
Please help me create a search that detects an anomaly of any host sending excessive logs compared to the previous hour. For example, if a host sends x events this hour and more than x + 25% the next hour, we should get a trigger.
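A minimal sketch of one way to do this, bucketing counts by hour and comparing each host to its own previous hour; the index and the 24-hour lookback are placeholders to adapt:

```
index=your_index earliest=-24h@h latest=@h
| bin _time span=1h
| stats count AS events BY _time, host
| streamstats current=f window=1 last(events) AS prev_events BY host
| where isnotnull(prev_events) AND events > prev_events * 1.25
```

The streamstats call carries each host's prior hourly count forward, so the where clause implements the "more than x + 25%" trigger.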
I have a couple of text boxes (Tracking No and Track Type) in my dashboard, and both are optional.

<fieldset submitButton="true" autoRun="false">
  <input type="text" token="TrackingNo">
    <label>Tracking Number</label>
    <default></default>
    <change>
      <condition value="">
        <set token="TrackingNo">*</set>
      </condition>
    </change>
  </input>
  <input type="text" token="Tracktype">
    <label>Tracktype</label>
    <default></default>
    <change>
      <condition value="">
        <set token="Tracktype">*</set>
      </condition>
    </change>
  </input>
</fieldset>

Scenario 1: when the user clicks the submit button without any input, the dashboard should display all the data.
Scenario 2: given both values, it should fetch all the records exactly matching the Tracking No and Track Type.
Scenario 3: given only the Tracking No, it should fetch all the records matching the Tracking No, irrespective of Track Type. (With the simple XML code above, Track Type is supplied as ".")
Scenario 4: given only the Track Type, it should fetch all the records matching the Track Type, irrespective of Tracking No. (With the simple XML code above, Tracking No is supplied as "*.")

Please help me construct the search query.
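One hedged approach to all four scenarios is to give each token a default of *, so an empty submit falls back to match-all, and to reference the tokens directly in the search. The index, sourcetype, and field names below are placeholders:

```
<input type="text" token="TrackingNo">
  <label>Tracking Number</label>
  <default>*</default>
</input>
<input type="text" token="Tracktype">
  <label>Tracktype</label>
  <default>*</default>
</input>

<query>
  index=your_index sourcetype=your_sourcetype
    TrackingNo="$TrackingNo$" Tracktype="$Tracktype$"
</query>
```

One caveat: if a user types a value and then clears the box, the token can end up as an empty string rather than the default. Some dashboards work around that by wildcard-wrapping the token, e.g. TrackingNo="*$TrackingNo$*", at the cost of exact matching in Scenario 2.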
System  OS
ABC     Windows-Server-2016
ABC     Windows-10-Enterprise
ABC     Mac-OSX
DEF     Windows Server-2016
DEF     Windows Server-2012
DEF     Red Hat v8.2

Above is some generic data from a CSV lookup; there is a "System" and an "OS" field. I have one drop-down that filters by system and works by populating dynamically. I want to add another drop-down, a static one, that filters by server/non-server:

Windows-10-Enterprise, Mac-OSX, etc. would be "Non-Server".
Red Hat v8.2, Windows Server-2012, Windows Server-2016, etc. would be "Server".
* would be for all OS.

I tried adding these as static options, but I can't seem to get it to work. Only "*" works as an all option. Any ideas?
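One hedged way to make the static drop-down work is to keep its values symbolic (*, Server, Non-Server) and classify the OS at search time with an eval, so a single choice can match many OS strings. The lookup file name, token names, and the classification regex below are assumptions for illustration:

```
<input type="dropdown" token="os_class">
  <label>OS Class</label>
  <choice value="*">All</choice>
  <choice value="Server">Server</choice>
  <choice value="Non-Server">Non-Server</choice>
  <default>*</default>
</input>

| inputlookup systems.csv
| eval os_class=if(match(OS, "Server|Red Hat"), "Server", "Non-Server")
| search System=$system_token$ os_class=$os_class$
```

Because the "All" choice carries the value *, the os_class=$os_class$ clause matches everything in that case, which is why plain static values failed but * worked.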
In my Phantom playbook, I'm using a custom code block to generate a string (specifically, a Python dictionary representing matches between two sets of data) that I'd like to add to the container as an artifact. At the end of the playbook, I'll attach that artifact to an email that will be sent out. I'm using the Phantom app with the "Add Artifact" action and have not been successful in adding my string as an artifact. Here are the prompts in the app and the values I'm putting in them:

name: matches
container_id: [blank, as it's optional]
label: event
source_data_identifier: matches
cef_name: matches
cef_value: Search_URL_Content:custom_function:matches (the CEF name for the string I'm interested in)
cef_dictionary: [blank, as it's optional]
contains: "matches": ["text"]

Every time I run the playbook, I get the following error from Add Artifact:

'add_artifact_1' on asset 'phantom': 1 action failed. (1) For Parameter: {"cef_name":"matches","cef_value":"[the string I want to add as an artifact]","contains":"text","context":{"artifact_id":0,"guid":"23efc7d2-f15b-4cb5-a083-f08793cd551d","parent_action_run":[]},"label":"event","name":"matches","source_data_identifier":"matches"} Message: "Error from server. Status code: 400, Details: each value in cef_types must be a list of strings indicating the possible types"

I've been working on this for several hours and can't find examples to go on. Can anyone offer assistance as to what I should enter into these fields to fix this error? Thanks!
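Judging purely from the error text ("each value in cef_types must be a list of strings") and the payload it echoes ("contains":"text"), the contains prompt appears to expect a JSON dictionary mapping each CEF field name to a list of type strings, rather than a bare string or an unbraced fragment. A guess at the shape it wants:

```
{"matches": ["text"]}
```

This is an assumption based on the 400 response, not confirmed against the Phantom app documentation.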
I'm trying to get the Splunk Enterprise Security Malware dashboards to populate. I'm ingesting data from Symantec using the Splunk_TA_symantec-ep v3.0.1 TA, which is indexing my data correctly from the dump files on the Symantec management server. These are being forwarded by a UF to an indexer (also with the app installed) and finally to the search head (also with the app installed). In the Enterprise Security app, I have checked that the data model receives data, following the YouTube video "Demystifying the CIM". I can run pivot searches and confirm that the data is being correctly picked up with the associated event fields, as described in the video and evidenced by the screenshot below. I can confirm the data model can access the index storing the Symantec data via the CIM configuration, and I have accelerated the data model. But still no population of the dashboards. Any help would be greatly appreciated.
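As a sanity check that accelerated events are actually reaching the data model, a tstats comparison can help (a sketch; the Malware_Attacks dataset name follows the CIM Malware model, but verify it against your CIM version):

```
| tstats summariesonly=true count FROM datamodel=Malware.Malware_Attacks BY sourcetype, Malware_Attacks.signature
```

If this returns zero with summariesonly=true but returns data with summariesonly=false, the acceleration itself (rather than ingest or tagging) is the likely gap, since the ES dashboards generally rely on the accelerated summaries.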
We are configuring the GCP app for Splunk. The Configuration tab is set up, but the Inputs page has no "Create New" button; it is just an empty page. The docs page says to use "Create New" to add an input, but doesn't mention it possibly being absent. https://docs.splunk.com/Documentation/AddOns/released/GoogleCloud/Setupv2

Using: Splunk Cloud 7.2, Splunk Add-on for Google Cloud Platform 3.0.0
Hi, as a temporary measure (for 3 months), we have been asked to set up one of the Splunk servers (an HF) to work as a syslog server, which should receive logs from Trend for security events. I have gone through a few URLs and pages from other questions but couldn't find an end-to-end setup. The reason is that I can't see any service named "syslog" on my server as such, but by default we have the rsyslog service running (I've configured and tested this between Ubuntu as the server and the HF as the client; however, I've been asked to use syslog instead of rsyslog for this temporary purpose). For my local testing, I am using my Ubuntu box as the server and the HF as the client before we actually get logs from Trend (which will be our actual client). On the client (we're asked to use TCP only, no UDP), what settings are required, and where do I configure the server details? Again, I don't see anything specific to syslog, only rsyslog; is this a valid use case? I've enabled the port under /etc/services as:

syslog myport/tcp #syslogclient

On the server side (Ubuntu), I configured inputs.conf with the client info. Any leads, specifically on the syslog configuration? Also, with TCP, will we have data loss with this mechanism? Any documentation link would be helpful. Thanks.
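For the rsyslog-based interim setup, a minimal sketch might look like the following; the port, paths, index, and sourcetype are assumptions to adapt:

```
# On the HF, /etc/rsyslog.d/trend.conf -- listen on TCP 514 and split files per sending host
module(load="imtcp")
input(type="imtcp" port="514")
$template TrendLog,"/var/log/syslog-trend/%HOSTNAME%/trend.log"
*.* ?TrendLog

# Splunk inputs.conf on the same HF -- monitor the files rsyslog writes
[monitor:///var/log/syslog-trend/*/trend.log]
index = trend
sourcetype = trendmicro
host_segment = 4
```

On data loss: TCP gives delivery guarantees at the transport level, but events can still be lost during service restarts or when buffers fill, so it reduces rather than eliminates loss compared to UDP.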
I need to get the list of triggered alerts, and I've been searching and executing queries in Splunk, but none gives me what I need. In this list, the triggered alerts must appear with their respective times, and it must be produced by a search, for the ease of downloading the results as a CSV and building statistics from there. Thank you very much in advance.
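One commonly used starting point (a sketch; field availability varies by Splunk version) is the audit index, which records fired alerts along with their trigger times:

```
index=_audit action=alert_fired
| table _time, ss_name, ss_app, severity, trigger_time
| sort - _time
```

An alternative is | rest /services/alerts/fired_alerts, though REST results can be harder to export over long time ranges than an indexed search.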
I have this JSON, and I want to extract the value when the name is "ca-channel" and the value when the name is "Ca-Request-Id", but with both in one row, for example:

channel | requestId
w       | 000001707ce0ca4c-58e1e56
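Since the JSON itself isn't shown, here is a hedged sketch assuming the name/value pairs live in an array such as headers{} (adjust the path to your actual structure):

```
| spath
| eval channel   = mvindex('headers{}.value', mvfind('headers{}.name', "ca-channel"))
| eval requestId = mvindex('headers{}.value', mvfind('headers{}.name', "Ca-Request-Id"))
| table channel, requestId
```

mvfind returns the index of the matching name within the multivalue field, and mvindex pulls the value at that same position, which is what keeps both values on one row.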
Hello, I have data from two different sources with the same fields, as shown below:

index= sourcetype= source=test.txt
device_name="alpha" pool_name="a"
device_name="beta" pool_name="b"
device_name="gamma" pool_name="c"

index= sourcetype= source=test1.txt
device_name="alpha" pool_name="a"
device_name="beta" pool_name="b"
device_name="gamma" pool_name="z"

eval actual_pools = toString(device_name) + ";" + toString(pool_name)

I am looking for the actual_pools values (built from the raw data with the eval above) that exist in source=test1.txt but not in source=test.txt. I tried using join and append but was unable to compare. Please help with the same, thanks.
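Instead of join/append, one sketch is to build actual_pools over both sources in a single search and keep the values seen only in test1.txt (fill in your index and sourcetype):

```
index=your_index sourcetype=your_sourcetype (source="test.txt" OR source="test1.txt")
| eval actual_pools = device_name . ";" . pool_name
| stats values(source) AS sources BY actual_pools
| where mvcount(sources)=1 AND sources="test1.txt"
```

With the sample data above, only gamma;z should survive, since every other combination appears in both sources.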
I want to display the events having a FAIL value in any of the columns. Please help me with this!
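Without knowing the field names, one generic sketch uses foreach to test every column for the value FAIL:

```
... your base search ...
| eval has_fail=0
| foreach * [eval has_fail=if('<<FIELD>>'="FAIL", 1, has_fail)]
| where has_fail=1
| fields - has_fail
```

The <<FIELD>> placeholder is substituted once per field, so the flag ends up set if any column of the event holds FAIL.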
Good morning. Since I've been working from home using VPN access to connect to the office, I've noticed I haven't been able to access my company's external email server. Is there a Splunk query I can run to give me a little more insight into why I am unable to access that external email server? Any assistance in that regard would be greatly appreciated. Thanks, Cosmo
Hi everyone, I would like to know if there is any way to merge or combine the results of two or more rows into a single row in a table, as in the example screenshots. Is there any possibility of changing that result into something like this?
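Since the screenshots aren't included, here is a hedged sketch of the usual pattern: group by the column that should stay unique and collapse the others into multivalue (or comma-joined) cells; colA and colB are placeholder field names:

```
... your base search ...
| stats values(colB) AS colB BY colA
| eval colB=mvjoin(colB, ", ")
```

Use list() instead of values() if duplicates and original order should be preserved rather than deduplicated.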
Hello. I am running SNMP Modular Input version 1.5 on Splunk Enterprise 7.2.7. I have set up the following SNMP input to poll the power supply status from switches. I am able to poll up to three IP addresses simultaneously with this input, but as soon as I add a fourth or fifth IP address to be polled, things become weird. It varies when only 1, 2, or maybe 3 IPs are polled, but never works with all 4 or 5. I will need to go up to 28 IPs polled by this SNMP input. I have added the error messages from the splunkd.log file. Does anyone have an idea what's going wrong here? Thank you, Karl

[snmp://Switch Power Supplies]
activation_key = 91C3A8052D3B6BB033AC165FDF24462E
communitystring = snmpcommunity
destination = 192.168.1.101,192.168.1.102,192.168.1.103,192.168.1.104
do_bulk_get = 0
do_get_subtree = 0
index = snmppoller
ipv6 = 0
object_names = 1.3.6.1.4.1.1588.2.1.1.1.1.22.1.3.7,1.3.6.1.4.1.1588.2.1.1.1.1.22.1.3.8
snmp_mode = attributes
snmp_version = 2C
snmpinterval = 120
sourcetype = snmp_ta
split_bulk_output = 0
trap_rdns = 0
v3_authProtocol = usmHMACMD5AuthProtocol
v3_privProtocol = usmDESPrivProtocol
port = 161

05-15-2020 15:14:24.987 +0200 ERROR ExecProcessor - message from "python "D:\Program Files\Splunk\etc\apps\snmp_ta\bin\snmp.py"" Exception with getCmd to 192.168.1.104:161: poll error: Traceback (most recent call last):
  File "D:\Program Files\Splunk\etc\apps\snmp_ta\bin\pysnmp-4.2.5-py2.7.egg\pysnmp\carrier\asynsock\dispatch.py", line 37, in runDispatcher
    use_poll=True, map=self.__sockMap, count=1)
  File "D:\Program Files\Splunk\Python-2.7\Lib\asyncore.py", line 220, in loop
    poll_fun(timeout, map)
  File "D:\Program Files\Splunk\Python-2.7\Lib\asyncore.py", line 156, in poll
    read(obj)
  File "D:\Program Files\Splunk\Python-2.7\Lib\asyncore.py", line 87, in read
    obj.handle_error()
  File "D:\Program Files\Splunk\Python-2.7\Lib\asyncore.py", line 83, in read
    obj.handle_read_event()
  File "D:\Program Files\Splunk\Python-2.7\Lib\asyncore.py", line 449, in handle_read_event
    self.handle_read()
  File "D:\Program Files\Splunk\etc\apps\snmp_ta\bin\pysnmp-4.2.5-py2.7.egg\pysnmp\carrier\asynsock\dgram\base.py", line 91, in handle_read
    raise error.CarrierError('recvfrom() failed: %s' % (sys.exc_info()[1],))
CarrierError: recvfrom() failed: [Errno 10035] A non-blocking socket operation could not be completed immediately
snmp_stanza:snmp://Switch Power Supplies
The dashboard edit menu has Schedule PDF Delivery grayed out. I am root and have all the permissions I believe I need, but it remains grayed out. I'm on version 7.2.6. I verified that the admin role has at least the schedule_search and admin_all_objects capabilities, and it does; I even restarted Splunk, but the feature remains grayed out. Can anyone assist?
Hi guys, I have a JSON file for the OS type in some clusters, like below:

{
  "clusterA": ubuntu,
  "clusterA": ubuntu,
  "clusterA": rhel5,
  "clusterA": sles11,
  "clusterB": sles11,
  "clusterB": sles11,
  "clusterB": ubuntu,
  "clusterC": centos,
  "clusterC": ubuntu
  ...
}

I'd like to sum the OS types for each cluster; in the sample above, 2 ubuntu in clusterA, 1 rhel5 in clusterA, etc. Would you please kindly help out? Thank you!
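Because the file repeats the same key per cluster, a standard JSON parser would silently keep only the last duplicate. One workaround in Python is to capture every key/value pair via object_pairs_hook and count the pairs; the sample below quotes the OS values, which the snippet above leaves bare (unquoted values are not valid JSON):

```python
import json
from collections import Counter

raw = """
{
  "clusterA": "ubuntu",
  "clusterA": "ubuntu",
  "clusterA": "rhel5",
  "clusterA": "sles11",
  "clusterB": "sles11",
  "clusterB": "sles11",
  "clusterB": "ubuntu",
  "clusterC": "centos",
  "clusterC": "ubuntu"
}
"""

# object_pairs_hook receives every (key, value) pair, duplicates included,
# so returning the raw pair list sidesteps dict's last-key-wins behavior.
pairs = json.loads(raw, object_pairs_hook=lambda p: p)

# Count occurrences of each (cluster, os) combination.
counts = Counter(pairs)
print(counts[("clusterA", "ubuntu")])  # → 2
print(counts[("clusterB", "sles11")])  # → 2
```
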