All Topics

Find Answers
Ask questions. Get answers. Find technical product solutions from passionate members of the Splunk community.


Hi, I am trying to upgrade my Splunk instance, and while performing the Upgrade Readiness check I see the error message below for a few apps: "Remove dependencies on the M2Crypto and Swig libraries." How can we address this? Any suggestions are much appreciated. Thanks.
Hello community, I am having a problem with a dashboard that I am setting up based on Splunk OnCall data, in order to see the acknowledgment and resolution times for alerts. To see the resolution period of my alerts, I made a dashboard that shows me the right information. However, I sometimes have lines with two users displayed and no more dates. Looking at the alert in detail, I see that the item I retrieve contains two pieces of information: one for the user who acknowledged the alert, and one for the resolution, which is always done by the "SYSTEM" user. In building my search, I cannot force it to keep only the "SYSTEM" user when I display the resolved alerts (for acknowledged alerts it is simpler, because I filter on the ACKED state upstream):

index="oncall_prod" routingKey=*
| search currentPhase=RESOLVED
| dedup incidentNumber
| rename transitions{}.at as ack, transitions{}.by as Utilisateur, incidentNumber as N_Incident, entityDisplayName as Nom_Incident
| eval create_time = strptime(startTime,"%Y-%m-%dT%H:%M:%SZ")
| eval ack_time = strptime(ack,"%Y-%m-%dT%H:%M:%SZ")
| eval temps_ack = tostring((ack_time - create_time), "duration")
| eval create_time=((create_time)+7200)
| eval ack_time=((ack_time)+7200)
| eval Debut_Incident = strftime(create_time,"%Y-%m-%d %H:%M:%S ")
| eval Traitement = strftime(ack_time,"%Y-%m-%d %H:%M:%S ")
| eval temps_ack = strftime(strptime(temps_ack, "%H:%M:%S"), "%H:%M:%S ")
| rename temps_ack as Temps_Traitement
| table N_Incident, Nom_Incident, Debut_Incident, Traitement, Temps_Traitement, Utilisateur

Do you have any idea what changes I need to make to see only the user linked to the resolution? I'm sure it's something simple, but I can't quite put my finger on it. Best regards, Rajaion
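For illustration, a minimal sketch of one way to isolate the resolution entry, assuming the transitions{} array holds one JSON object per phase change with at and by keys (names taken from the events above; adjust to the actual structure): expand the multivalue transitions and keep only the one made by SYSTEM.

index="oncall_prod" routingKey=* currentPhase=RESOLVED
| dedup incidentNumber
| spath path=transitions{} output=transition
| mvexpand transition
| eval resolved_by = spath(transition, "by"), resolved_at = spath(transition, "at")
| where resolved_by="SYSTEM"

After this filter each incident keeps a single row for the resolution transition, so the existing strptime/strftime logic can be applied to resolved_at unchanged.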
We need a way for our custom add-on to include additional information from an alert in the cim_modactions log it writes when a failure happens. The custom add-on's purpose is to create tickets in a remote system with fields from the alert results. Therefore, in the case of a failure to create a ticket in the remote system, it would be really helpful to know the details of the alert results which failed to be sent. We can then alert on cim_modactions in the case of action_status=failure and respond by resending that alert. (Ideally we would modify the add-on to be resilient and retry, but we also need to know about these failures, because in the case of an outage on the remote side we would still need to know what had failed to be sent.) Ideally we would include the entire contents of the alert result in the cim_modactions index. As nearly as we can tell, the "signature" field is often filled with contextual information. Replacing that value may be an option for us if we can find a sensible way to do so. I go into more detail and specificity below. The cim_modactions index is useful in determining whether a specific action has been successful or not in our client's environment. We send the output of our Splunk alerts to an external ticketing system through an add-on we built using the Splunk Add-on Builder | Splunkbase. For the sake of this question, let's call the application we built the "ticketing system TA" and the corresponding sourcetype in cim_modactions "modular_alerts:ticketing_system". If we search using index=cim_modactions sourcetype="modular_alerts:ticketing_system", we return all cim_modactions events about the ticketing system. We can tell when an alert failed to be created in the remote system if we search on:

index=cim_modactions sourcetype="modular_alerts:ticketing_system" action_status=failure

We get results like:

2022-10-01 09:25:29,179 ERROR pid=1894149 tid=MainThread file=cim_actions.py:message:431 | sendmodaction - worker="search_head_fqdn" signature="HTTPSConnectionPool(host='ticketing_system_fqdn', port=443): Max retries exceeded with url: /Ticketing/system/path/to/login (Caused by ProxyError('Cannot connect to proxy.', ConnectionResetError(104, 'Connection reset by peer')))" action_name="ticketing_system" search_name="Bad things might be happening" sid="scheduler__nobody_ZHNsYV91c2VfY2FzZXM__RMD5e17ae2c72132ca0f_at_1664615700_985" rid="14" app="app where search lives" user="nobody" digest_mode="0" action_mode="saved" action_status="failure"
host = search_head_hostname
source = /opt/splunk/var/log/splunk/ticketing_system_ta_modalert.log
sourcetype = modular_alerts:ticketing_system

Notice that we get a helpful error about the reason for the failure, the search it happened during, and the timestamp. Unfortunately, this does not get us down to which alert or alerts failed to be sent. In each of our searches we have a field which identifies which remote application is logging; let's call it client_application_id. If we could include that number, like client_application_id=#####, that would be a help. Even more helpful would be to include alert_result_text="<complete text of the payload being sent across to the remote system at the time of the failure>". We also noticed that if signature contains anything that looks like an assignment, then that assignment becomes a field. For example, in a few cases we actually do see client_application_id=#####, but these are few and not in the case of failures.
In these cases there is also signature="client_application_id=#####". So if there is a way to pass additional text into "signature" from the generated modactions helper script, which we modify, that may be an option for us. Any direction on solving this specific question, or even a suggestion of an alternate approach, would be much appreciated. (This is a better tagged and titled duplicate of How are logs written to the cim_modifications inde... - Splunk Community. The other should be deleted.) @ohbuckeyeio @starcher
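For illustration, a sketch of the kind of search that could pull the id back out of signature once the add-on embeds it there, regardless of whether automatic field extraction catches it (field names taken from the sample event above; the rex pattern assumes a numeric id):

index=cim_modactions sourcetype="modular_alerts:ticketing_system" action_status=failure
| rex field=signature "client_application_id=(?<client_application_id>\d+)"
| table _time search_name sid rid client_application_id signature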
Has anyone used this app? I am trying to download and configure it, but every time I receive a "404" error. Below is the link I am trying to follow: https://docs.splunk.com/Documentation/NetApp/2.1.91/DeployNetapp/InstalltheSplunkAppforNetAppDataONTAP Any assistance, please? Thanks, Sushant
Hi all, I'm not really sure where to start here. I want to create a lookup and use it to send a daily report, tracking the list of users who log in after a specific date (different for each user). Any ideas? Thanks
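A minimal sketch of one way to approach it, assuming a CSV lookup named user_cutoff_dates.csv with columns user and cutoff_date (the lookup name, the index, and the Windows logon search are all hypothetical placeholders; swap in the real authentication data):

index=wineventlog sourcetype="WinEventLog:Security" EventCode=4624
| lookup user_cutoff_dates.csv user OUTPUT cutoff_date
| where isnotnull(cutoff_date) AND _time > strptime(cutoff_date, "%Y-%m-%d")
| stats earliest(_time) as first_login_after_cutoff, count as logins by user
| convert ctime(first_login_after_cutoff)

Saved as a scheduled report with an email action, something like this would produce the daily list.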
I'm using Splunk SOAR 5.3.3. When I add 10 outputs to a playbook, the warning text "Limit 10 outputs reached" appears. Can I extend the output limit in my playbook?
Hi, multiple forwarders stop sending data for no apparent reason roughly every 20 days, but after a restart they all start sending normally again. There are no warning or error logs in splunkd either, so I am not sure what's causing the issue. It happens on the same forwarders every time.
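One diagnostic sketch, assuming the forwarders' _internal logs still reach the indexers up to the moment they stop: chart the queue fill from metrics.log for an affected host (substitute the real hostname) to see whether a queue fills up shortly before data stops arriving.

index=_internal source=*metrics.log* host=<forwarder_host> group=queue
| timechart span=10m max(current_size) by name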
Hello, I have a stream of call data records in XML form coming into Splunk, and I would like to add some ingestion-time transformations to it. However, I have broken the input at least twice, so I need a debugging setup. I ran a packet capture to get about three minutes' worth of the stream (500 or so megabytes) and stripped out the XML data into a raw text file. I am going to "ingest" this file into a test server. How do I dump the contents of an index so I can re-import the same data over and over again to test my transforms? --jason
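A minimal sketch of one export approach, assuming the test index is named cdr_test (a placeholder): write the raw events out to a CSV, then clear the index (for example with the CLI's splunk clean eventdata -index cdr_test while Splunk is stopped) and re-ingest the exported file on each test cycle.

index=cdr_test
| table _raw
| outputcsv cdr_raw_export.csv

outputcsv writes the file under $SPLUNK_HOME/var/run/splunk/csv, so the same export can be replayed against the test server repeatedly while iterating on props/transforms.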
I am on a new install of Splunk 9.0.1 and Add-on Builder 4.1.1.
- I am creating a Python script with checkpointing as a Data Collection input.
- Testing (with the Test button) works as intended on the Edit Data Input page.
- After publishing, I am not able to create a new input to start collecting events.
- The script works as intended without the checkpoints.
- Code of the checkpoint below:

for service in r_json["response"]:
    state = helper.get_check_point(str(service["name"]) + str(service["appstack"]))
    if state is None:
        final_result.append(service)
        helper.save_check_point(str(service["name"]) + str(service["appstack"]), "Indexed")
        #helper.delete_check_point(str(service["name"]) + str(service["appstack"]))

- index="_internal" errors are below:

ERROR ExecProcessor [1464594 ExecProcessor] - message from "/Applications/Splunk/bin/python3.7 /Applications/Splunk/etc/apps/splunk_assist/bin/uiassets_modular_input.py" RuntimeError: assist binary not found, path=/Applications/Splunk/etc/apps/splunk_assist/bin/darwin_x86_64/assistsup
ERROR ExecProcessor [1464594 ExecProcessor] - message from "/Applications/Splunk/bin/python3.7 /Applications/Splunk/etc/apps/splunk_assist/bin/uiassets_modular_input.py" raise RuntimeError(f'assist binary not found, path={full_path}')
ERROR ExecProcessor [1464594 ExecProcessor] - message from "/Applications/Splunk/bin/python3.7 /Applications/Splunk/etc/apps/splunk_assist/bin/uiassets_modular_input.py" File "/Applications/Splunk/etc/apps/splunk_assist/bin/assist/supervisor/context.py", line 41, in _test_supervisory_binary
ERROR ExecProcessor [1464594 ExecProcessor] - message from "/Applications/Splunk/bin/python3.7 /Applications/Splunk/etc/apps/splunk_assist/bin/uiassets_modular_input.py" _test_supervisory_binary(base_path)
Hello team, were there any announcements about support for the latest PHP version by AppDynamics? Currently this version does not pass the requirements check. Thank you in advance.
I know this is not what Splunk is for, but since we have so much of our current monitoring built into it, I wanted to see if I can add this as well. I am looking to create a dashboard that a user pulls up and provides values for certain inputs; it takes the values and produces a result based on a pre-defined algorithm. For (an extremely simple) example, the user has to enter: Number of Cars, Number of Packages, Number of People. Then there is a formula we have stored somewhere that takes the inputs, weights them, and produces a result (i.e., "take the blue ferry" vs. "take the white ferry"). It would be nice if we could do this, even though it's very simple, since we're asking our users to spend more time in the tool. TIA!
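A minimal sketch of what the dashboard's search could look like, assuming three numeric form inputs bound to tokens $cars$, $packages$, and $people$ (the token names, weights, and threshold are all placeholders for the real formula):

| makeresults
| eval cars = $cars$, packages = $packages$, people = $people$
| eval score = 0.5*cars + 0.3*packages + 0.2*people
| eval result = if(score > 10, "take the blue ferry", "take the white ferry")
| table cars packages people score result

Since makeresults generates a single synthetic row, no indexed data is needed; the formula lives entirely in the eval, so the dashboard works purely off the user's inputs.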
I am facing this issue for the second time, and I have tried almost every possible way out in the last 2 months. Here is the situation: we have a CSV file which gets refreshed every 1 hour (it may or may not contain new events). We observed that after a few hours the file stops getting into Splunk, and after a Splunk restart it starts ingesting data again. In the splunkd logs it says it is ignoring the path. I have tried crcSalt and initCrcLength, but neither worked in my case. All I want is for Splunk to always read the new file, whether or not there are new events, and stay up to date with the file (I cannot add a counter in the file).
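One way to see why the tailing processor is skipping the file while the problem is occurring (a diagnostic sketch; this is the commonly used input-status REST endpoint, but the exact field names can vary by version):

| rest /services/admin/inputstatus/TailingProcessor:FileStatus splunk_server=local
| search title="*your_file.csv"
| table title type "file position" "file size" "percent"

Comparing the reported file position and type against the file on disk at the moment ingestion stops should show whether Splunk considers the refreshed file already read.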
Greetings fellow Splunkers, I was wondering if anyone has figured out the most accurate metric to track when a user logs into Windows: not the boot-up/startup time, but the time between when a user enters their password and when they are able to interact with the desktop. I am not able to see a particular event for this. Waiting for GPO to complete is not viable, since we stream them in the background. Comparing events between local and AD logs might prove useful, but we have a significant number of users who are WFH and use cached creds until they get on the VPN. Comparing against the login time from when they get on the VPN would be simpler, but if they do anything else before they log into the VPN, that will throw it off as well. I would appreciate any thoughts or ideas you fine folks might have. Thank you!
Hi community, I have been using a very nice 3D scatterplot in a dashboard in Splunk Cloud (8.2). It was working fine. Now the visualization is broken; it seems like it has been removed. Can someone please confirm whether that is the case? Is there any alternative for it, or does anyone know a workaround to get it working? Thanks in advance for any help.
Hi Splunk community, I am currently having an issue with deploying apps to universal forwarders. On the deployment server side, I have the hosts set up in the whitelist for specific server classes, and on the UF side, I have a deployment client on the hosts, plus they are phoning home to the DS. We are not receiving logs from these UFs because the app that contains the inputs.conf for these servers is not getting pushed to them. Is there a way to force the app to be pushed, or am I missing a configuration that is causing this? This has been a recurring problem because the app sporadically gets removed from these servers. Thanks in advance!
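A diagnostic sketch to run on the deployment server after forcing a re-evaluation with splunk reload deploy-server: list what the DS believes about its clients (this uses the deployment server's client REST endpoint; exact field names may differ by version).

| rest /services/deployment/server/clients splunk_server=local
| table hostname ip lastPhoneHomeTime

If an affected UF appears here with a recent phone-home time but still lacks the app, the mismatch is likely in the server class whitelist rather than connectivity.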
I have a set of results for the search with id="base_metrics_search" which provides 3 panels with data. The events each contain a bunch of question metrics and have two fields of note, question_id and is_answered, which I'd like to use to provide data for another 2 panels. An example result set would be:

question_id  is_answered
1            1
2            0
3            1
4            0
5            0
6            1
7            1
8            1
9            0
10           0
11           1
12           0
13           0
14           1
15           1

How do I find the ids of the first 5 answered and unanswered questions? The first 5 of each type could be in any order. I am hoping to use two tokens to pass these values to other panels as a multivalue or comma-separated list. So for the above example, I would end up with something like:

answered_ids = 1,3,6,7,8
unanswered_ids = 2,4,5,9,10

I have searched around the docs and I haven't figured out what SPL to use to do this. I am currently using a chained-search approach using "head", but this gives me results in the first panel and none in the second (I'm using Splunk Enterprise 8.2.2):

<panel>
  <title>Top Questions</title>
  <table>
    <title>Answered Questions</title>
    <search base="general_metrics_base">
      <query>| head limit=5 (is_answered=1) | fields ...</query>
    </search>
    <option name="drilldown">none</option>
    <option name="refresh.display">none</option>
  </table>
  <table>
    <title>Unanswered Questions</title>
    <search base="general_metrics_base">
      <query>| head limit=5 (is_answered=0) | fields ...</query>
    </search>
    <option name="drilldown">none</option>
    <option name="refresh.display">none</option>
  </table>
</panel>

I'm looking into passing just the question_ids on, since I need to do further querying in those next two panels anyway. I assume the answered-questions search gets rid of events in the base_metrics_search results, preventing the unanswered-questions panel search from using them. Should the second panel (for the unanswered questions) of a pair of panels, both chained off the same base search, get the same original result set that base_metrics_search returned to the first panel? Thanks in advance for any help you can offer!
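A sketch of one way to get both lists from a single chained search, so both tokens can be set from one result row (this replaces the two head searches; transpose turns the two type rows into two columns):

| stats list(question_id) as ids by is_answered
| eval ids = mvjoin(mvindex(ids, 0, 4), ",")
| eval type = if(is_answered == 1, "answered_ids", "unanswered_ids")
| fields type ids
| transpose 0 header_field=type
| fields - column

With one row carrying answered_ids and unanswered_ids columns, a <done> handler such as <set token="answered_ids">$result.answered_ids$</set> can publish both tokens to the other panels. As a side note on the original symptom: head with a condition stops at the first event that fails the condition rather than skipping over it, which would explain why the second chained search comes back empty when the base results start with an answered question.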
I want to create new_field as, for each event, the sum of that event's field_1 value plus every other field_1 value that is smaller than it. In the example below, 23 is greater than the other values (10 and 11), so its result is the sum of all of them, 44; 10 is the smallest, so its result is just 10; for 11 there is one smaller value (10), so its result is 11 + 10 = 21.
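A minimal sketch of one way to compute this, assuming each event carries a single numeric field_1: once the events are sorted ascending, a running total gives exactly "this value plus all smaller values". Saving the original position first lets the output return to the incoming order.

| streamstats count as orig_order
| sort 0 + field_1
| streamstats sum(field_1) as new_field
| sort 0 + orig_order
| fields - orig_order

For the example values 10, 11, 23, the running sum yields 10, 21, 44, matching the expected results.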
When I query data, the worker id field in the search results shows domain\worker_id, for example ABC\123456; the domain name appears in front of the worker id. I would like to delete only the domain prefix ABC\ from the field so that the result shows only the number of the worker id. Example: ABC\123456 >>> 123456. Please recommend how to do this in a search. Best Regards, CR
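A minimal sketch, assuming the field is literally named worker_id and the separator is always a backslash: split on the backslash and keep the last part.

| eval worker_id = mvindex(split(worker_id, "\\"), -1)

If some values have no domain prefix, split returns a single element and mvindex(-1) still yields the full value, so those rows pass through unchanged.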
Hello all, my problem is that I think Splunk has a maximum accepted character count for the stats command. When I perform this search:

index="bnc_6261_pr_log_conf" logStreamName="*b6b3-f8d14815eaf8/i-09bfc06d1ff10cb79/config_Ec2_CECIO_Linux/stdout"

I see 3 events. Now if I perform this request:

index="bnc_6261_pr_log_conf" logStreamName="*b6b3-f8d14815eaf8/i-09bfc06d1ff10cb79/config_Ec2_CECIO_Linux/stdout"
| eval l = len(message)
| stats values(l) as NumberOfCar

I receive two lengths; one was lost:
172
6277

If I perform this statistics request:

index="bnc_6261_pr_log_conf" logStreamName="*b6b3-f8d14815eaf8/i-09bfc06d1ff10cb79/config_Ec2_CECIO_Linux/stdout"
| eval length=len(_raw)
| stats max(length) perc95(length) max(linecount) perc95(linecount)

I receive: max(length): 29886, perc95(length): 275756.

The event I lose actually has 28,973 characters; I think the current limit is 10,000. I already changed the TRUNCATE parameter to 80,000, which is why I can load events of more than 10,000 characters. My question is: can I change the stats limit in Splunk for the maximum number of characters? With which parameter, and can it be done from the web page? Can it be changed by a non-admin, and for a specific source? Thanks for your future help. Hugues
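One thing worth checking before changing any limits (a sketch using only the fields above): stats values() returns unique values, so if two of the three events happen to have the same length, values(l) will legitimately show only two numbers; list() keeps one entry per event and a count makes the event total explicit.

index="bnc_6261_pr_log_conf" logStreamName="*b6b3-f8d14815eaf8/i-09bfc06d1ff10cb79/config_Ec2_CECIO_Linux/stdout"
| eval l = len(message)
| stats list(l) as NumberOfCar, count

If a per-value size cap really is in play, my understanding (an assumption worth verifying against the limits.conf reference for your version) is that it would sit in the [stats] stanza of limits.conf, for example maxvaluesize, which an admin changes on the search head; it is not exposed per-source or in the web UI.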
I want to move my .NET 6 API to a Graviton instance with arm64. Does the agent support it? I don't see any docs about it. I use init container instrumentation, but I see only an amd64 Docker image for Alpine. Is there any other instrumentation process that supports arm64?