All Topics


Hi, I am looking to upgrade the AppDynamics .NET agent to the latest version on the Windows server during the Windows patching activity. There should not be any additional downtime to restart IIS, so I should configure all the parameters before the restart window. Please provide us with steps to help do this. Regards, Sushma
Can we create a Splunk dashboard using Python scripting in Splunk Cloud? If yes, what is the process for doing so?
Please check out the idea here (because I don't think it's currently possible with Splunk, unless someone has a workaround or solution that I don't know about): https://ideas.splunk.com/ideas/EID-I-1417

(Copying the same content here; I recommend upvoting the idea if you think this is not possible with Splunk today.) Does anyone know if it is possible to add metadata field(s) to identify all the Splunk instances that have processed a particular event? Let me explain with an example: I'm collecting WinEventLog from instance1 using a UF, which forwards the logs to instance2, an intermediate UF, which forwards to an intermediate HF (instance3), which forwards the data to the indexer (idx1).

instance1 (UF) -> instance2 (I UF) -> instance3 (I HF) -> idx1 (Indexer)

I want to see if there is a way to get a meta field (an index-time field) that records the full sequence of Splunk instances a particular event has traveled through. This would be useful for troubleshooting in complex environments. Even having this only as a debugging option, enabled by some parameters, would help. I don't think it's currently possible unless someone has a workaround or solution.
Has anyone tried sending data from an HF to a UF? I know it's a stupid question, and I know it's not going to work. But if someone has tried it before, intentionally or by mistake, I'm curious to know what happens in this scenario: what error will the HF throw, and what error will the UF throw?
Hi, I have a dashboard and I need to restrict viewing of this dashboard to people coming from certain IP addresses. Is this possible and, if yes, how? Thanks, Patrick
Hello, I am collecting logs from various endpoints via UFs into a Splunk HF. One of the data inputs is firewall logs via syslog on port 514. My question is: will I have to set up a data input on port 514 on Splunk Cloud too? Or will all logs be forwarded globally over port 9997, which is already set up? Thanks!
Hello guys. We use the SNMP Modular Input to poll data from devices. We use Cisco, added the Cisco MIBs, then added the IF-MIB. After adding the IF-MIB file to the snmp_ta folder on the heavy forwarder, on the ad hoc search head we see only 6 fields from IF-MIB (ifDescr_1, ... ifDescr_6). When we do an snmpwalk from the heavy forwarder to the device, we see 87 fields with interface names. It seems there may be a limit in limits.conf, but maybe you have had the same situation?
Hi All, I have two sourcetypes in the same index. The field names are different, but the value is the same for a user's email address. Yet when I do a coalesce or use a | where clause, Splunk shows "No results found". For example: sourcetype s1 contains the email field while s2 contains the user_email field. Both fields have the same value: john_smith@domain.com

index=xx (sourcetype=s1 OR sourcetype=s2) (email=* OR user_email=*)
| eval user_id = coalesce(email, user_email)

or

index=xx (sourcetype=s1 OR sourcetype=s2)
| where email=user_email

Result: No results found. I am following what is mentioned in https://community.splunk.com/t5/Splunk-Search/merge-two-sourcetypes-that-have-the-same-data-but-different/m-p/493244, but in my case it shows 0 matching results. Any idea what the issue can be? Is the @ sign or the "." (dot) in the email address creating a problem?
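
A minimal SPL sketch of one way to verify whether the two fields really line up, assuming the index and field names from the post. Note that | where email=user_email can only match events that contain both fields at once; since each sourcetype carries only one of them, that comparison returns nothing by design:

index=xx (sourcetype=s1 OR sourcetype=s2)
| eval user_id = lower(trim(coalesce(email, user_email)))
| stats values(sourcetype) AS sourcetypes count BY user_id
| where mvcount(sourcetypes) > 1

The lower() and trim() calls are just guards against case or whitespace differences. If even the plain base search with (email=* OR user_email=*) returns nothing, the fields are probably not being extracted at search time at all, which would point at field extraction rather than the @ or dot characters.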
After running the query, there is no option under Timestamp > Choose Column; it shows "No result". Kindly see the image. I'm going to use the timestamp column, named "Timestamp". I'm currently a beginner here.
Hi Team, when I use curl I am able to get the output in JSON format, but when I try to use the requests module, I get a JSON decode error.

import requests
import json
from requests.auth import HTTPBasicAuth

# The export endpoint returns XML unless output_mode=json is requested
search_query = {'search': 'search earliest=-24h index=* userid=abc123',
                'output_mode': 'json'}
response = requests.post('https://1.1.1.1/services/search/jobs/export',
                         data=search_query, verify=False,
                         auth=HTTPBasicAuth('admin', 'pass'))
print("Status = " + str(response.status_code))
print("response text = " + str(response.text))

# /search/jobs/export streams results, one JSON object per line,
# so parse line by line instead of json.loads() on the whole body
for line in response.text.splitlines():
    if line.strip():
        print(json.loads(line))
Hello All, We recently upgraded from 7.3 to 8.1. We had a few inputs in DB Connect, which was upgraded from 3.1 to 3.8. The inputs were migrated to the upgraded app, but we are unable to see them in the GUI. The connections are showing up, but none of the inputs are visible in the GUI. Can someone please help/advise on what needs to be done? Thanks
I have two queries:

index="gtw-ilb" /v1/platform/change_indicators host="*dev01*"
| search sourcetype="nginx:plus:access"
| eval env = mvindex(split(host, "-"), 1)
| convert num(status) as response_code
| eval env = mvindex(split(host, "-"), 1)
| eval tenant=split(access_request, "tenantId=")
| eval tenant=mvindex(tenant, 1)
| eval tenant=split(tenant, "&")
| eval tenant=mvindex(tenant, 0)
| stats count(eval(like(response_code,"%%%"))) AS total_request count(eval(like(response_code,"4%%"))) AS error_request4 count(eval(like(response_code,"5%%"))) AS error_request5 by tenant
| eval pass_percent = round(100-((error_request4+error_request5)/total_request*100),2)
| where total_request > 1
| table tenant, pass_percent, total_request
| sort -pass_percent limit=3

And:

index="gtw-ilb" /v1/platform/change_indicators host="*dev01*"
| search sourcetype="nginx:plus:access"
| eval env = mvindex(split(host, "-"), 1)
| convert num(status) as response_code
| eval env = mvindex(split(host, "-"), 1)
| eval tenant=split(access_request, "tenantId=")
| eval tenant=mvindex(tenant, 1)
| eval tenant=split(tenant, "&")
| eval tenant=mvindex(tenant, 0)
| stats count(eval(like(response_code,"%%%"))) AS total_request count(eval(like(response_code,"4%%"))) AS error_request4 count(eval(like(response_code,"5%%"))) AS error_request5 by tenant
| eval pass_percent = round(100-((error_request4+error_request5)/total_request*100),2)
| where total_request > 1
| table tenant, pass_percent, total_request
| sort -total_request limit=10

These two queries share about 90% of their search criteria; they differ only in the final sort. I want to union the two into one query and keep even the duplicate results. What would that single query look like?
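
A hedged sketch of one way to get both result sets in a single search, assuming the shared portion of the query (everything up to and including | table tenant, pass_percent, total_request) is kept exactly as above. appendpipe runs each sub-pipeline over the result set that already exists, so the expensive base search only executes once; the list field below is just an illustrative label:

<shared search, ending with | table tenant, pass_percent, total_request>
| appendpipe
    [ sort -pass_percent
    | head 3
    | eval list="top3_by_pass_percent" ]
| appendpipe
    [ where isnull(list)
    | sort -total_request
    | head 10
    | eval list="top10_by_total_request" ]
| where isnotnull(list)

Tenants that qualify for both lists appear twice, once per list value, which matches the requirement to keep duplicate results.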
I'm trying to null out the values in the "status" field for any value matching "InActive", based on accounttype. I'd appreciate help with the appropriate SPL. Thanks.

accounttype          status      count
Human_Account        Active      1333
Human_Account        InActive    106
Generic_Account      Active      50
Service_Account      InActive    540
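
A minimal SPL sketch of one reading of this, assuming the table above is the output of an existing search and that "nullify" means blanking the status value when it is InActive; the accounttype test below is an illustrative assumption and can be changed or dropped:

<existing search producing accounttype, status, count>
| eval status=if(status="InActive" AND accounttype="Service_Account", null(), status)

If the intent is instead to drop those rows entirely rather than blank the field, a | where NOT (status="InActive" AND accounttype="Service_Account") would do that.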
Hi Experts! I am trying to replace the join command with the stats command because the subsearch result exceeds 50,000 rows. However, I don't know how to do the replacement because this SPL uses join twice. Can you please advise? Thanks in advance!!

index=myindex sourcetype=A LogicalName="new_endpoiint"
| join left=L right=R where L.new_contract.Name = R.new_contract_code
    [ search index=myindex sourcetype=A LogicalName="new_contract" ]
| join left=L2 right=R2 where L2.R.new_circuit.Name = R2.new_circuit_code
    [ search index=cmdb sourcetype=A LogicalName="new_circuit" ]
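
A rough sketch of the usual join-to-stats pattern for the first hop, assuming the field names resolve the same way outside the join syntax (the L/R prefixes go away). The idea is to search both record types in one pass, build a shared key, and let stats merge the rows:

(index=myindex sourcetype=A LogicalName="new_endpoiint")
OR (index=myindex sourcetype=A LogicalName="new_contract")
| eval contract_key=coalesce('new_contract.Name', new_contract_code)
| stats values(*) AS * BY contract_key

The second join could be folded in the same way: add the new_circuit search to the base, build a circuit key with coalesce('new_circuit.Name', new_circuit_code), and run a second stats pass BY that key. stats avoids the 50,000-row subsearch limit, but note that events with a null BY key are dropped, so rows missing a key may need fillnull or separate handling.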
Hi All, I am getting the below error in the HF logs and am not able to see any of the latest events on the SH.

ERROR HttpInputDataHandler - Failed processing http input, token name=n/a, channel=n/a, source_IP=XX.XX.XX.XX, reply=4, events_processed=0, http_input_body_size=2142, parsing_err=""

Kindly assist me with this issue.
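
A small SPL sketch that may help narrow this down, assuming the heavy forwarder's _internal logs are searchable from the SH; it summarizes the HttpInputDataHandler failures by reply code and parsing error so you can see whether the problem is the token, the channel, or the payload:

index=_internal sourcetype=splunkd component=HttpInputDataHandler log_level=ERROR
| stats count BY host reply parsing_err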
Hello, Has anyone run into an issue where the AppLocker data in Splunk logs shows the SID information instead of the user ID? Any help on how to fix this would be greatly appreciated!
I have a set of long-running processes that are occasionally restarted. They generate a set of "heartbeat" events where only the timestamp of the event changes, but otherwise the same data is repeated. Occasionally they encounter an interesting event and log a bunch of dynamic data, then go back to the "heartbeat" events. The log files start off very similar and very short, but do eventually grow (not too large; < 1mb each). A new log file is started whenever the process restarts, but otherwise the process will use the same log file until it terminates.

It seems like Splunk is great at reading some of the files, but other files it completely ignores. I checked splunkd.log and found this error message matching one of the missing files:

04-06-2022 10:23:49.155 -0700 ERROR TailReader [19680 tailreader0] - File will not be read, is too small to match seekptr checksum (file=...). Last time we saw this initcrc, filename was different. You may wish to use larger initCrcLen for this sourcetype, or a CRC salt on this source. Consult the documentation or file a support case online at http://www.splunk.com/page/submit_issue for more info.

props.conf:

[custom_json]
DATETIME_CONFIG =
INDEXED_EXTRACTIONS = json
KV_MODE = none
LINE_BREAKER = ([\r\n]+)
NO_BINARY_CHECK = true
TIMESTAMP_FIELDS = timestamp
TIME_FORMAT = %Y-%m-%dT%H:%M:%S.%6QZ
category = Structured
description = JavaScript Object Notation format. For more information, visit http://json.org/
disabled = false
pulldown_type = true
TZ = UTC
JSON_TRIM_BRACES_IN_ARRAY_NAMES = true

inputs.conf:

[monitor://c:\...\logs]
disabled = false
host = MM-IRV-NB33
sourcetype = custom_json
crcSalt = <SOURCE>

I suspect what happened is that TailReader registered an error on this file, which may have been legitimate if the file was too small at the time, but then the error was never cleared, and so even though the file grew it would never again be touched by Splunk. Does that sound right? If so, how do I 1) prevent this error from happening again and 2) clear the error so that my existing files can be read into Splunk?
I have created a scripted input (/opt/splunk/etc/apps/mytestapp/bin/scriptedinput1.sh) to run against my kubetools instances that simply runs a kubectl command and outputs JSON:

#!/bin/sh
/usr/bin/kubectl -n mynamespace1 get deployments,statefulsets -o json

However, after I set up the scripted input in the Data inputs section of the Splunk console and run a search, I'm seeing this error in splunkd.log:

ERROR ExecProcessor - message from "/opt/splunk/etc/apps/mytestapp/bin/scriptedinput1.sh" /opt/splunk/etc/apps/mytestapp/bin/scriptedinput1.sh: line 3: /usr/bin/kubectl: No such file or directory

I suspect it's throwing this error because kubetools is only installed on the kubetools instances and not on the splunksearch instance. Is there any way to run scripted inputs with commands in them that aren't installed on the splunksearch instance? If not, what alternative solution would be recommended? Any assistance with solving this would be greatly appreciated.
On the Splunk Add-on for VMware Metrics configuration page, I get DCN Credential Validation as "invalid" after entering my master URI in the format https://<servername>:8089. The following is the error in dcn_configuration.log:

2022-04-11 22:24:24,035 INFO [ConfigSetUp] getting credentials stanza node:https://<servername>:8089 currently being edited.
2022-04-11 22:24:24,086 INFO [ConfigSetUp] SSL certificate validation disabled for collection configuration
2022-04-11 22:24:54,127 ERROR [ConfigSetUp] [pool=Global pool]Could not find splunkd on node=https://<servername>:8089
2022-04-11 22:24:54,173 INFO [ConfigSetUp] [pool=Global pool]Updated the conf modification time property for type: node and pool: Global pool
2022-04-11 22:24:54,173 INFO [ConfigSetUp] [pool=Global pool]Node stanza: https://<servername>:8089 edited successfully.

I am trying to configure my HF as a DCN. Hope someone can help me with this situation. Thanks.
Dear All, I have a problem with the number of files opened by Linux. I have set the ulimit parameter to 999999, but I still have Splunk service crashes due to file descriptors; this happens on the search heads. Is there a way to tell Splunk not to open more files? I have tried with [inputproc] max_fd = 120000, but it keeps opening many more files. The Linux version is Oracle Linux Server release 7.8.
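
A quick SPL sketch that may be worth running first, assuming _internal data from the search heads is available; splunkd logs the resource limits it actually sees at startup under the ulimit component, which helps confirm whether the 999999 setting really reaches the Splunk process (it often does not when Splunk is started via systemd or an init script that applies its own limits):

index=_internal sourcetype=splunkd component=ulimit
| table _time host _raw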