All Topics


Hello! I realize that the question is a bit particular, so I will try to explain through an example. I am indexing JSON that arrives with escaped characters and leading/trailing quotes, like this:

"{\"data\": {\"essentials\": {\"monitorCondition\": \"Resolved\",\"firedDateTime\": \"2022-09-26T14:56:41.7862462Z\",\"resolvedDateTime\": \"2022-09-26T15:02:47.9852843Z\"}}}"

I need to assign _time according to the following rule: if monitorCondition=Fired, parse firedDateTime as _time; otherwise, parse resolvedDateTime as _time. Since Splunk does not understand the JSON directly because of the escaped quotes, I am attempting the following:

1. Format _raw so that Splunk interprets it correctly.
2. Calculate the value to use as the timestamp.
3. Assign that timestamp to the _time field.

This is my props.conf so far:

[json_test_st]
KV_MODE = json
DATETIME_CONFIG =
LINE_BREAKER = ([\r\n]+)
MAX_TIMESTAMP_LOOKAHEAD = 500
NO_BINARY_CHECK = true
TZ = GMT
category = Custom
disabled = false
pulldown_type = 1
SEDCMD-formatjson = s/\\|^\"|\"$//g
TRANSFORMS = gettime
TIMESTAMP_FIELDS = timestamp
TIME_FORMAT = %Y-%m-%dT%H:%M:%S.%7NZ

And this is my transforms.conf:

[gettime]
INGEST_EVAL = timestamp=if('data.essentials.monitorCondition' = "Fired",'data.essentials.firedDateTime','data.essentials.resolvedDateTime')

The result is that I can get Splunk to parse the JSON correctly, but it does not extract the timestamp. Could anybody give me a push in the right direction?

Thank you and best regards,
Andrew
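For what it's worth, one direction I'm considering (a sketch, untested): since timestamp extraction normally runs before index-time TRANSFORMS, a field created by INGEST_EVAL may arrive too late for TIMESTAMP_FIELDS to see, so the transform could set _time directly instead. This assumes json_extract() is available in INGEST_EVAL on my version, that the SEDCMD has already cleaned _raw by the time the transform runs, and that the %7N subsecond directive works in strptime here:

[gettime]
# Compute _time directly instead of writing an intermediate "timestamp" field (sketch)
INGEST_EVAL = _time=strptime(if(json_extract(_raw,"data.essentials.monitorCondition")=="Fired", json_extract(_raw,"data.essentials.firedDateTime"), json_extract(_raw,"data.essentials.resolvedDateTime")), "%Y-%m-%dT%H:%M:%S.%7NZ")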
I just set up the UF on my DC (it's a lab environment) and I can confirm that both are connected on the specified ports using netstat, but I'm not getting any logs from my DC. I'm also using the Splunk Add-on for Windows with logs enabled for Sysmon and AD only. This is my inputs file for the add-on. I keep getting these errors in my Splunk instance. Here's also the splunkd log file.
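For comparison, a minimal inputs.conf for the Splunk Add-on for Windows covering Security and Sysmon events often looks roughly like this (a sketch; exact stanza names depend on the add-on version, and the Sysmon channel name assumes a default Sysmon install):

[WinEventLog://Security]
disabled = 0

[WinEventLog://Microsoft-Windows-Sysmon/Operational]
disabled = 0
renderXml = true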
Hi, we've been trying to use "for loop" logic within playbook app actions, but there seems to be no way to achieve this out of the box. For example, we use the action "get file", which only accepts one machine ID at a time. We want to get multiple files via this action (for each item, send to "get file"), thereafter send each file through a sub-playbook, and then return all outputs to a prompt in the main playbook with enrichment.
I know this has already been asked in the past, but it is still not completely clear to me:
https://community.splunk.com/t5/Alerting/Can-someone-explain-when-I-would-use-quot-Once-quot-versus-quot/m-p/279202
https://docs.splunk.com/Documentation/Splunk/latest/Alert/AlertTriggerConditions

For testing purposes, to understand the topic, I have set up the following correlation search, which triggers a notable event in Splunk Enterprise Security. The same can also be done as an alert in Splunk Enterprise:

| makeresults count=3
| streamstats count
| eval value=case(count=1,"test01",count=2,"test02",count=3,"test03")
| eval alert=case(count=1,"KO",count=2,"OK",count=3,"KO")
| search alert="KO"
| `get_event_id`
| eval orig_index=index
| eval orig_indexer_guid=indexer_guid
| eval orig_event_hash=event_hash
| eval orig_cd=_cd
| eval orig_raw=_raw

The above search runs every 5 minutes and generates 2 events. I tried switching between "Once" and "For each result", but nothing changes: 2 notable events are always generated. I was expecting:

"Once": generate only 1 notable event
"For each result": generate 2 notable events

So the question is: what is the difference between "Once" and "For each result", and why does changing it have no effect in my test?

Thanks a lot,
Edoardo
I have the following log:

Requests over Threshold found: {"kv":{"top_requests":[{"operation_name":"get","last_dispatch_duration_us":136231,"last_remote_socket":"xx","last_local_id":"67B57F7300000001/00000000C1E2DBA3","last_local_socket":"xxx:37894","total_dispatch_duration_us":136231,"total_server_duration_us":3,"operation_id":"0x127f1","timeout_ms":250,"last_server_duration_us":3,"total_duration_us":136516},{"operation_name":"get","last_dispatch_duration_us":135914,"last_remote_socket":"xxx","last_local_id":"67B57F7300000001/00000000C1E2DBA3","last_local_socket":"xxx:37894","total_dispatch_duration_us":135914,"total_server_duration_us":15,"operation_id":"0x127e9","timeout_ms":250,"last_server_duration_us":15,"total_duration_us":135985},{"operation_name":"get","last_dispatch_duration_us":135827,"last_remote_socket":"xxx.xxx:11210","last_local_id":"67B57F7300000001/000000006A92D90B","last_local_socket":"xxx:59306","total_dispatch_duration_us":135827,"total_server_duration_us":15,"operation_id":"0x127e7","timeout_ms":250,"last_server_duration_us":15,"total_duration_us":135946}],"total_count":3}}

How can I parse this?
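One common search-time approach (a sketch, assuming the JSON payload always follows the literal "Requests over Threshold found: " prefix) is to strip the prefix with rex and hand the remainder to spath, expanding the top_requests array into one row per request:

| rex field=_raw "Requests over Threshold found: (?<json_payload>\{.*\})"
| spath input=json_payload path=kv.top_requests{} output=request
| mvexpand request
| spath input=request
| table operation_name timeout_ms total_duration_us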
Hi,

| tstats earliest(_time) as Earliest latest(_time) as Latest where index=_internal by _time, index, sourcetype, host span=1d
| eval Earliest=strftime(Earliest,"%Y-%m-%dT%H:%M:%S.%Q")
| eval Latest=strftime(Latest,"%Y-%m-%dT%H:%M:%S.%Q")
| appendcols [| tstats count where index=_internal by _time]

I would like to generate a dashboard showing host, sourcetype, latest event received, total event count, and a sparkline of the count over 1 month. With the above query I am getting results like this. Is there any other alternative? Please suggest.
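One alternative worth trying (a sketch, untested; adjust the time window and grouping fields to your needs) keeps everything in a single tstats-plus-stats pipeline and builds the sparkline from pre-bucketed daily counts, avoiding the appendcols:

| tstats count latest(_time) as Latest where index=_internal earliest=-30d by host, sourcetype, _time span=1d
| stats sparkline(sum(count)) as Trend, sum(count) as TotalEvents, max(Latest) as Latest by host, sourcetype
| eval Latest=strftime(Latest,"%Y-%m-%dT%H:%M:%S")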
Hello, I have an API call that is bringing JSON data into my Splunk environment. When I do a basic query of the index, I can see that every event is properly parsed in the 'list view':

definition: { [-]
  knowledgeBases: [ [+] ]
  name: xxxxx
  published: xxxxx
  threatIndicators: Privilege_Escalation
  uniqueId: xxxx
}
firstFound: xxxxx
lastFound: xxxxx
status: Active
target: { [+] }
RiskRating: Medium
RiskScore: 4.58
Solution: Service teams should upgrade the impacted package
}

This data is then used in an accelerated data model for a few dashboards. However, I noticed that when using tstats in the queries, a lot of the results were missing. I then tried troubleshooting the data within the index using 'table' and realized I need to use | spath to get the fields to display correctly in my table. Does anyone have any insight into this, or how to possibly use spath within a tstats query?
Hello,

We are getting an M2Crypto blocker during the scan (Platform Readiness app) for migrating to 8.2.8 (current version: 8.1.7). This blocker appears on a few instances (HFs only); the IDXs and SHs all passed. My questions are:

1. Is this a blocker for the upgrade?
2. Can you suggest a workaround to resolve this?
Hi everybody,

I am creating a dashboard in Splunk and I'm searching for a solution. I have a list of machines, grouped by type, from an Excel file. I have a dbxquery to get the data for each machine from a DB; then, using a lookup, I can get the count of events for each type. What I want to do next is add drilldowns to the dashboard to distinguish the types based on the number of machines. For example: if a type has fewer than 50 machines, it will be listed in drilldown 1; if more than 50, it will be listed in drilldown 2. The reason for separating into 2 groups is that I want to set the timechart span differently: span=1h for drilldown 1 and span=2h for drilldown 2.

Here is my search:

| dbxquery connection="server" query="SELECT * FROM table "
| lookup lookup.csv numero OUTPUT type
| eval _time=strptime(time_receive,"%Y-%m-%dT%H:%M:%S.%N")
| timechart span=2h count by type
| untable _time type count_event
| makecontinuous
| fillnull value=0
| where count_event = 0
| sort - _time

Can I do something in the search, like: if I click on drilldown 1, run the search with span=1h, and when I choose from drilldown 2, run it with span=2h? I also want to have an ALL option in each drilldown. Do you have any idea?

Thanks,
Julia
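If it helps frame an answer: in a classic (SimpleXML) dashboard the span can be driven by a token set from an input, so each group's panel runs with its own span. A minimal sketch (the input name, token name, and labels are illustrative):

<input type="dropdown" token="span_tok">
  <label>Machine group</label>
  <choice value="1h">Group 1 (fewer than 50 machines)</choice>
  <choice value="2h">Group 2 (50 or more machines)</choice>
  <default>1h</default>
</input>

and in the panel query: | timechart span=$span_tok$ count by type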
I like to use saved searches with tokens inside classic XML dashboards, e.g.:

<form>
...
<search>
  <query>| savedsearch "my_savedsearch" tok_token1="$form.tok_token1$" tok_token2="$form.tok_token2$" ...</query>
</search>
...
</form>

But when I want to edit the saved search later in the search app/dashboard, the tokens are not set, e.g.:

index=xyz field1=$tok_token1$ AND field2=$tok_token2$

The only way I know to run and edit the query is to temporarily replace the tokens with constant values, because something like | eval tok_token1=value beforehand does not work. Is there a better way to temporarily set the tokens in a dynamic search so the query can be refined afterwards?
Hello, I have a DB input for a database containing a list of users with email addresses and phone numbers, and people can make changes to that DB, including deleting rows. The problem I encounter is that the already-indexed data retains rows that were deleted from the DB, so alerts are still sent to already-deleted contacts. I want to find a way to reindex that DB on a daily basis and delete the previously indexed data. I have 3 solutions in mind:

1. Set up an alert that runs | delete daily so DB Connect can reindex (but I have to manually reset the rising column checkpoint to 0).
2. Batch-input the table daily and set up my search (it's a join in an alert) to search over -1d, since it's not a lot of data.
3. Join the table directly within the SQL query when indexing, so it always has the updated DB (but that will tax the DB server side).

Which of these 3 solutions do you think is good? Or can you offer an alternate, better solution?
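One caveat worth noting for option 1, with a sketch of what the scheduled search might look like (the index name is a placeholder): | delete only masks events from search results rather than reclaiming disk space, and the owner of the scheduled search needs a role with the can_delete capability.

index=contacts_db_index
| delete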
Hi, I am trying to upgrade my Splunk instance and found the below error message for a few apps while performing the Upgrade Readiness check:

Remove dependencies on the M2Crypto and Swig libraries.

How can we address this? Any suggestion is much appreciated.

Thanks.
Hello community,

I am having a problem with a dashboard I am setting up based on Splunk OnCall data, in order to see the acknowledgment and resolution times for alerts. To see the resolution time of my alerts, I made a dashboard that shows me the right information. However, I sometimes get rows with two users displayed and no dates. Looking at the alert in detail, I see that the item I retrieve contains two pieces of information: one for the user who acknowledged the alert, and one for the resolution, which is always done by the "SYSTEM" user. In the construction of my search, I cannot "impose" keeping only the "SYSTEM" user when I display the resolved alerts (for acknowledged alerts it is simpler, because I filter the ACKED states upstream):

index="oncall_prod" routingKey=*
| search currentPhase=RESOLVED
| dedup incidentNumber
| rename transitions{}.at as ack, transitions{}.by as Utilisateur, incidentNumber as N_Incident, entityDisplayName as Nom_Incident
| eval create_time = strptime(startTime,"%Y-%m-%dT%H:%M:%SZ")
| eval ack_time = strptime(ack,"%Y-%m-%dT%H:%M:%SZ")
| eval temps_ack = tostring((ack_time - create_time), "duration")
| eval create_time=((create_time)+7200)
| eval ack_time=((ack_time)+7200)
| eval Debut_Incident = strftime(create_time,"%Y-%m-%d %H:%M:%S ")
| eval Traitement = strftime(ack_time,"%Y-%m-%d %H:%M:%S ")
| eval temps_ack = strftime(strptime(temps_ack, "%H:%M:%S"), "%H:%M:%S ")
| rename temps_ack as Temps_Traitement
| table N_Incident, Nom_Incident, Debut_Incident, Traitement, Temps_Traitement, Utilisateur

Do you have any idea what changes I need to make to see only the user linked to the resolution? I'm sure it's something simple, but I can't quite put my finger on it.

Best regards,
Rajaion
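One pattern that might do it (a sketch, untested against OnCall data; it assumes transitions{}.at and transitions{}.by are parallel multivalue fields) is to zip the two fields together and keep only the SYSTEM entry, placed before the rename:

| eval pair=mvzip('transitions{}.at', 'transitions{}.by', "|")
| eval resolved_pair=mvfilter(match(pair, "\|SYSTEM$"))
| eval ack=mvindex(split(resolved_pair, "|"), 0)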
We need a way for our custom add-on to include additional information from an alert in the cim_modactions log it writes when a failure happens. The custom add-on's purpose is to create tickets in a remote system with fields from the alert results. Therefore, when a ticket fails to be created in the remote system, it would be really helpful to know details of the alert results which failed to be sent. We can then alert on cim_modactions in the case of action_status=failure and respond by resending that alert. (Ideally we would modify the add-on to be resilient and retry, but we also need to know about these failures, because in the case of an outage on the remote side we would still need to know what had failed to be sent.) Ideally we would include the entire contents of the alert result in the cim_modactions index. As nearly as we can tell, the "signature" field is often filled with contextual information; replacing that value may be an option for us if we can find a sensible way to do so.

I go into some more detail and specificity below.

The cim_modactions index is useful in determining whether a specific action has been successful or not in our client's environment. We send the output of our Splunk alerts to an external ticketing system through an add-on we built using the Splunk Add-on Builder | Splunkbase. For the sake of this question, let's call the application we built the "ticketing system TA" and the corresponding sourcetype in cim_modactions "modular_alerts:ticketing_system". If we search using index=cim_modactions sourcetype="modular_alerts:ticketing_system", we return all cim_modactions entries about the ticketing system. We can tell whether an alert failed to be created in the remote system if we search on:

index=cim_modactions sourcetype="modular_alerts:ticketing_system" action_status=failure

We get results like:

2022-10-01 09:25:29,179 ERROR pid=1894149 tid=MainThread file=cim_actions.py:message:431 | sendmodaction - worker="search_head_fqdn" signature="HTTPSConnectionPool(host='ticketing_system_fqdn', port=443): Max retries exceeded with url: /Ticketing/system/path/to/login (Caused by ProxyError('Cannot connect to proxy.', ConnectionResetError(104, 'Connection reset by peer')))" action_name="ticketing_system" search_name="Bad things might be happening" sid="scheduler__nobody_ZHNsYV91c2VfY2FzZXM__RMD5e17ae2c72132ca0f_at_1664615700_985" rid="14" app="app where search lives" user="nobody" digest_mode="0" action_mode="saved" action_status="failure"
host = search_head_hostname
source = /opt/splunk/var/log/splunk/ticketing_system_ta_modalert.log
sourcetype = modular_alerts:ticketing_system

Notice that we get a helpful error about the reason for the failure, the search it happened during, and the timestamp. Unfortunately, this does not get us down to which alert or alerts failed to be sent. In each of our searches we have a field which identifies which remote application is logging; let's call it client_application_id. If we could include that number, like client_application_id=#####, that would be a help. Even more helpful would be to include alert_result_text="<complete text of the payload being sent across to the remote system at the time of the failure>".

We also noticed that if signature contains anything that looks like an assignment, then that assignment becomes a field. For example, in a few cases we actually do see client_application_id=#####, but these are few and not in the case of failures. In these cases there is also:

signature="client_application_id=#####"

So if there is a way to pass additional text into "signature" from the generated modactions helper script which we modify, that may be an option for us. Any direction on solving this specific question, or even a suggestion of an alternate approach, would be much appreciated.

(This is a better tagged and titled duplicate of How are logs written to the cim_modifications inde... - Splunk Community. The other should be deleted.)

@ohbuckeyeio @starcher
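In the meantime, for the cases where signature does happen to carry the assignment, a search-time extraction like this can recover the ID where present (a sketch):

index=cim_modactions sourcetype="modular_alerts:ticketing_system"
| rex field=signature "client_application_id=(?<client_application_id>\S+)"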
Has anyone used this app? I am trying to download and configure it, but I receive a "404" error every time. Below is the link I am trying to follow:

https://docs.splunk.com/Documentation/NetApp/2.1.91/DeployNetapp/InstalltheSplunkAppforNetAppDataONTAP

Any assistance, please?

Thanks,
Sushant
Hi all,

I'm not really sure where to start here. I want to create a lookup to drive a daily report, tracking a list of users who might log in after a specific date (different for each user). Any ideas?

Thanks
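A sketch of one way to structure it (everything here is hypothetical: a lookup user_watchlist.csv with columns user and watch_after, plus an authentication index and fields that would need to match your data):

index=auth_index action=success
| lookup user_watchlist.csv user OUTPUT watch_after
| where isnotnull(watch_after) AND _time > strptime(watch_after, "%Y-%m-%d")
| stats earliest(_time) as first_login_after_date by user
| fieldformat first_login_after_date=strftime(first_login_after_date, "%Y-%m-%d %H:%M:%S")

Scheduled as a daily report with an email action, this would send the list each day.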
I'm using Splunk SOAR 5.3.3. When I add 10 outputs to a playbook, the warning text "Limit 10 outputs reached" appears. Can I extend the output limit in my playbook?
Hi,

Multiple forwarders stop sending data for no apparent reason roughly every 20 days, but when they are restarted, all of them start sending normally again. There are no warning or error logs in splunkd either, so I am not sure what's causing the issue. It happens on the same forwarders every time.
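One place worth checking the next time it happens (a sketch; <forwarder_host> is a placeholder) is whether the forwarder's queues were blocked, which shows up in its metrics.log rather than as errors or warnings:

index=_internal sourcetype=splunkd host=<forwarder_host> group=queue blocked=true
| timechart count by name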
Hello,

I have a stream of call data records in XML form coming into Splunk, and I would like to add some ingest-time transformations to it. However, I have broken the input at least twice, so I need a debugging setup. I ran a packet capture to get about three minutes' worth of the stream (500 or so megabytes) and stripped the XML data out into a raw text file. I am going to "ingest" this file into a test server. How do I dump the contents of an index so I can re-import the same data over and over again to test my transforms?

--jason
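Since the sample is already in a flat file, one workable loop (a sketch; the index and sourcetype names are placeholders) is to skip dumping the index entirely: wipe a dedicated test index between runs, then re-ingest the file as a oneshot.

# on a standalone test instance; clean requires splunkd to be stopped
$SPLUNK_HOME/bin/splunk stop
$SPLUNK_HOME/bin/splunk clean eventdata -index cdr_test -f
$SPLUNK_HOME/bin/splunk start

# re-ingest the captured sample with the sourcetype under test
$SPLUNK_HOME/bin/splunk add oneshot /path/to/cdr_sample.xml -index cdr_test -sourcetype cdr_xml_test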
I am on a new install of Splunk 9.0.1 and Add-on Builder 4.1.1.

- I am creating a Python script with checkpointing as a data collection input.
- Testing (with the Test button) works as intended on the Edit Data Input page.
- After publishing, I am not able to create a new input to start collecting events.
- The script works as intended without the checkpoints.
- Code of the checkpointing below:

for service in r_json["response"]:
    state = helper.get_check_point(str(service["name"]) + str(service["appstack"]))
    if state is None:
        final_result.append(service)
        helper.save_check_point(str(service["name"]) + str(service["appstack"]), "Indexed")
        #helper.delete_check_point(str(service["name"]) + str(service["appstack"]))

- index="_internal" errors are below:

ERROR ExecProcessor [1464594 ExecProcessor] - message from "/Applications/Splunk/bin/python3.7 /Applications/Splunk/etc/apps/splunk_assist/bin/uiassets_modular_input.py" RuntimeError: assist binary not found, path=/Applications/Splunk/etc/apps/splunk_assist/bin/darwin_x86_64/assistsup
ERROR ExecProcessor [1464594 ExecProcessor] - message from "/Applications/Splunk/bin/python3.7 /Applications/Splunk/etc/apps/splunk_assist/bin/uiassets_modular_input.py" raise RuntimeError(f'assist binary not found, path={full_path}')
ERROR ExecProcessor [1464594 ExecProcessor] - message from "/Applications/Splunk/bin/python3.7 /Applications/Splunk/etc/apps/splunk_assist/bin/uiassets_modular_input.py" File "/Applications/Splunk/etc/apps/splunk_assist/bin/assist/supervisor/context.py", line 41, in _test_supervisory_binary
ERROR ExecProcessor [1464594 ExecProcessor] - message from "/Applications/Splunk/bin/python3.7 /Applications/Splunk/etc/apps/splunk_assist/bin/uiassets_modular_input.py" _test_supervisory_binary(base_path)