All Topics

Hi Team, when I use curl I am able to get the output in JSON format, but when I try to use the requests module I get a JSON decode error.

import requests
import json
from requests.auth import HTTPBasicAuth

# Run an export search and try to parse the response body as JSON
search_query = {'search': 'search earliest = -24h index=* userid=abc123'}
response = requests.post('https://1.1.1.1/services/search/jobs/export',
                         data=search_query, verify=False,
                         auth=HTTPBasicAuth('admin', 'pass'))
print("Status = " + str(response.status_code))
print("response text = " + str(response.text))
json_data = json.loads(str(response.text))
print(json.dumps(json_data))
Hello All, we recently upgraded from 7.3 to 8.1. We had a few inputs in DB Connect that were upgraded from 3.1 to 3.8. The inputs were migrated to the upgraded app, but we are unable to see them in the GUI. The connections are showing up, but none of the inputs are visible in the GUI. Can someone please help/advise on what needs to be done? Thanks
I have two queries:

index="gtw-ilb" /v1/platform/change_indicators host="*dev01*"
| search sourcetype="nginx:plus:access"
| eval env = mvindex(split(host, "-"), 1)
| convert num(status) as response_code
| eval env = mvindex(split(host, "-"), 1)
| eval tenant=split(access_request, "tenantId=")
| eval tenant=mvindex(tenant, 1)
| eval tenant=split(tenant, "&")
| eval tenant=mvindex(tenant, 0)
| stats count(eval(like(response_code,"%%%"))) AS total_request count(eval(like(response_code,"4%%"))) AS error_request4 count(eval(like(response_code,"5%%"))) AS error_request5 by tenant
| eval pass_percent = round(100-((error_request4+error_request5)/total_request*100),2)
| where total_request > 1
| table tenant, pass_percent, total_request
| sort -pass_percent limit=3

And:

index="gtw-ilb" /v1/platform/change_indicators host="*dev01*"
| search sourcetype="nginx:plus:access"
| eval env = mvindex(split(host, "-"), 1)
| convert num(status) as response_code
| eval env = mvindex(split(host, "-"), 1)
| eval tenant=split(access_request, "tenantId=")
| eval tenant=mvindex(tenant, 1)
| eval tenant=split(tenant, "&")
| eval tenant=mvindex(tenant, 0)
| stats count(eval(like(response_code,"%%%"))) AS total_request count(eval(like(response_code,"4%%"))) AS error_request4 count(eval(like(response_code,"5%%"))) AS error_request5 by tenant
| eval pass_percent = round(100-((error_request4+error_request5)/total_request*100),2)
| where total_request > 1
| table tenant, pass_percent, total_request
| sort -total_request limit=10

These two queries share about 90% of their search criteria; they differ only in the final sort and limit. I want to combine them into one query that returns the union of both result sets, keeping any duplicate rows. What would that single query look like?
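A minimal sketch of one way to get the union from a single pass over the data. The shared search, eval, and stats portion is condensed from the queries above; result_set is a helper label field introduced here so the two top-N subsets can be told apart, and duplicates are kept because a tenant may appear in both subsets:

index="gtw-ilb" /v1/platform/change_indicators host="*dev01*" sourcetype="nginx:plus:access"
| convert num(status) as response_code
| eval tenant=mvindex(split(mvindex(split(access_request, "tenantId="), 1), "&"), 0)
| stats count(eval(like(response_code,"%%%"))) AS total_request count(eval(like(response_code,"4%%"))) AS error_request4 count(eval(like(response_code,"5%%"))) AS error_request5 by tenant
| eval pass_percent=round(100-((error_request4+error_request5)/total_request*100),2)
| where total_request > 1
| appendpipe [ sort -pass_percent | head 3 | eval result_set="top3_by_pass_percent" ]
| appendpipe [ where isnull(result_set) | sort -total_request | head 10 | eval result_set="top10_by_total_request" ]
| where isnotnull(result_set)
| table tenant, pass_percent, total_request, result_set

The first appendpipe appends the top 3 by pass_percent, the second appends the top 10 by total_request (filtering out the rows already tagged so they are not double-counted), and the final where keeps only the two appended subsets.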
I'm trying to nullify the data in the "status" field for any value matching "InActive", based on accounttype. I'd appreciate help with the appropriate SPL. Thanks

accounttype          status      count
Human_Account        Active      1333
Human_Account        InActive    106
Generic_Account      Active      50
Service_Account      InActive    540
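A minimal sketch of the eval/if pattern, assuming the table above comes from a stats count by accounttype status and the goal is to blank out the count wherever status is "InActive" (the base search is a placeholder, and the condition can be narrowed to specific accounttype values as needed):

index=...
| stats count by accounttype status
| eval count=if(status=="InActive", null(), count)

If the intent is instead to blank the status value itself, swap the last line for: eval status=if(status=="InActive", null(), status)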
Hi Experts! I am trying to replace the join command with the stats command because the subsearch result exceeds 50,000 rows. However, I don't know how to do the replacement because this SPL uses join twice. Can you please advise? Thanks in advance!!

index=myindex sourcetype=A LogicalName="new_endpoiint"
| join left=L right=R where L.new_contract.Name = R.new_contract_code
    [ search index=myindex sourcetype=A LogicalName="new_contract" ]
| join left=L2 right=R2 where L2.R.new_circuit.Name = R2.new_circuit_code
    [ search index=cmdb sourcetype=A LogicalName="new_circuit" ]
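A minimal sketch of the usual join-to-stats pattern for the first join, assuming new_contract.Name on the endpoint events lines up with new_contract_code on the contract events (field names are copied from the query above and may need adjusting). The idea is to search both event types at once, eval a shared key with coalesce, and let stats merge the rows:

(index=myindex sourcetype=A LogicalName="new_endpoiint") OR (index=myindex sourcetype=A LogicalName="new_contract")
| eval contract_key=coalesce('new_contract.Name', new_contract_code)
| stats values(*) as * by contract_key

The second join can be handled the same way in a follow-on step: add the new_circuit events to the base search, eval a circuit key with coalesce on the circuit name/code fields, and run a second stats by that key.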
Hi All, I am getting the error below in the HF logs and am not able to see any of the latest events on the SH.

ERROR HttpInputDataHandler - Failed processing http input, token name=n/a, channel=n/a, source_IP=XX.XX.XX.XX, reply=4, events_processed=0, http_input_body_size=2142, parsing_err=""

Kindly assist me with this issue.
Hello, has anyone run into an issue where the AppLocker data in Splunk logs shows the SID information instead of the userID? Any help on how to fix this would be greatly appreciated!
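A minimal sketch of one common workaround, assuming the Windows add-on extracts TargetUserSid and TargetUserName from 4624 logon events and that the AppLocker events carry the SID in a UserId field (the index, source, field, and lookup names here are all assumptions to adapt): build a SID-to-user lookup from the Security log, then apply it to the AppLocker events.

index=wineventlog EventCode=4624
| stats latest(TargetUserName) as user by TargetUserSid
| outputlookup sid_to_user.csv

index=wineventlog source="*AppLocker*"
| lookup sid_to_user.csv TargetUserSid as UserId OUTPUT user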
I have a set of long-running processes that are occasionally restarted. They generate a set of "heartbeat" events where only the timestamp of the event changes, but otherwise the same data is repeated. Occasionally they encounter an interesting event and log a bunch of dynamic data, then go back to the "heartbeat" events. The log files start off very similar and very short, but do eventually grow (not too large; < 1mb each). A new log file is started whenever the process restarts, but otherwise the process will use the same log file until it terminates. It seems like Splunk is great at reading some of the files, but other files it completely ignores. I checked splunkd.log and found this error message matching one of the missing files:

04-06-2022 10:23:49.155 -0700 ERROR TailReader [19680 tailreader0] - File will not be read, is too small to match seekptr checksum (file=...). Last time we saw this initcrc, filename was different. You may wish to use larger initCrcLen for this sourcetype, or a CRC salt on this source. Consult the documentation or file a support case online at http://www.splunk.com/page/submit_issue for more info.

props.conf:

[custom_json]
DATETIME_CONFIG =
INDEXED_EXTRACTIONS = json
KV_MODE = none
LINE_BREAKER = ([\r\n]+)
NO_BINARY_CHECK = true
TIMESTAMP_FIELDS = timestamp
TIME_FORMAT = %Y-%m-%dT%H:%M:%S.%6QZ
category = Structured
description = JavaScript Object Notation format. For more information, visit http://json.org/
disabled = false
pulldown_type = true
TZ = UTC
JSON_TRIM_BRACES_IN_ARRAY_NAMES = true

inputs.conf:

[monitor://c:\...\logs]
disabled = false
host = MM-IRV-NB33
sourcetype = custom_json
crcSalt = <SOURCE>

I suspect what happened is that TailReader registered an error on this file, which may have been legitimate if the file was too small at the time, but the error was never cleared, so even though the file grew it would never again be touched by Splunk. Does that sound right? If so, how do I 1) prevent this error from happening again and 2) clear the error so that my existing files can be read into Splunk?
I have created a scripted input (/opt/splunk/etc/apps/mytestapp/bin/scriptedinput1.sh) to run against my kubetools instances that simply runs a kubectl command and outputs JSON:

#!/bin/sh
/usr/bin/kubectl -n mynamespace1 get deployments,statefulsets -o json

However, after I set up the scripted input in the Data inputs section of the Splunk console and run a search, I'm seeing this error in splunkd.log:

ERROR ExecProcessor - message from "/opt/splunk/etc/apps/mytestapp/bin/scriptedinput1.sh" /opt/splunk/etc/apps/mytestapp/bin/scriptedinput1.sh: line 3: /usr/bin/kubectl: No such file or directory

I suspect it's throwing this error because kubectl is only installed on the kubetools instances and not on the splunksearch instance. Is there any way to run scripted inputs with commands in them that aren't installed on the splunksearch instance? If not, what alternative solution would be recommended? Any assistance with solving this would be greatly appreciated.
In the Splunk Add-on for VMware Metrics configuration page, I get DCN Credential Validation as "invalid" after giving my master URI in the format https://<servername>:8089. The following is the error in dcn_configuration.log:

2022-04-11 22:24:24,035 INFO [ConfigSetUp] getting credentials stanza node:https://<servername>:8089 currently being edited.
2022-04-11 22:24:24,086 INFO [ConfigSetUp] SSL certificate validation disabled for collection configuration
2022-04-11 22:24:54,127 ERROR [ConfigSetUp] [pool=Global pool]Could not find splunkd on node=https://<servername>:8089
2022-04-11 22:24:54,173 INFO [ConfigSetUp] [pool=Global pool]Updated the conf modification time property for type: node and pool: Global pool
2022-04-11 22:24:54,173 INFO [ConfigSetUp] [pool=Global pool]Node stanza: https://<servername>:8089 edited successfully.

I am trying to configure my HF as a DCN. Hope someone can help me in this situation. Thanks.
Dear All, I have a problem with the number of files opened by Linux. I have set the ulimit parameter to 999999, but I still have Splunk service crashes due to file descriptors; this happens on the Search Heads. Is there a way to tell Splunk not to open more files? I have tried [inputproc] max_fd = 120000, but it keeps opening many more files. The Linux version is Oracle Linux Server release 7.8.
I would like to limit the 'All' option to what my query actually returns, rather than having it match all (*) values of targetAppAlternateId.

<form theme="dark">
  <label>Logins</label>
  <fieldset submitButton="false">
    <input type="dropdown" token="myApp">
      <label>Application:</label>
      <fieldForLabel>targetAppAlternateId</fieldForLabel>
      <fieldForValue>targetAppAlternateId</fieldForValue>
      <search>
        <query>index=myIndex targetAppAlternateId="App1.*" OR targetAppAlternateId="App2" | dedup targetAppAlternateId | sort by targetAppAlternateId</query>
        <earliest>-24h@h</earliest>
        <latest>now</latest>
      </search>
      <choice value="*">All</choice>
    </input>

Any help would be greatly appreciated.
Hi, I am new to Splunk. I am currently using this query to get the count: index=* SrcCountry=* | stats count by SrcCountry. If I wanted to narrow down the search to one country and get a week-by-week comparison of the count, what kind of query could I use?
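A minimal sketch of one approach, assuming the country of interest is "US" (substitute the actual SrcCountry value); bucketing events into one-week spans gives a count per week that can be compared side by side:

index=* SrcCountry="US"
| timechart span=1w count

or, for a plain table of week buckets instead of a time chart:

index=* SrcCountry="US"
| bin _time span=1w
| stats count by _time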
Sounds easy, eh? I've been using Splunk since v3 -- I've set up forwarding for servers dozens of times and migrated countless indexes, but this one is kicking my butt. I have a stand-alone Splunk server (Enterprise) that's been ingesting data for years in the form of CSV files and providing a front end for analysts. I need to decommission that box and get the data into our main cluster. I set up forwarding from the stand-alone server to feed into a heavy forwarder (that has a thousand other hosts feeding into it) and then into the cluster. It's working insofar as it forwarded data, but only from the last CSV file (back to March 17th, FWIW). I can't simply copy the files into a new index because of the cluster, and I no longer have the previous CSV files to re-ingest (going back to 2009). I've tried clearing the fishbucket, hoping to force it to resend everything it knows. It's feeding into an index of the same name. No errors in splunkd.log... Thoughts? Thanks! Michael
Gentlemen, my raw events have a field called login_time with values in the format 2022-04-11 10:52:08. This is the time a user logs in to the system. There is no logout_time field in the raw data. The requirement is to track all activities done by the user starting from login_time and ending at login_time + 8 hours.

1) How do I add these 8 hours to login_time in my search? Do I create an eval function, something like eval logout_time = login_time + 8:00:00?

2) transaction works with strings in startswith and endswith. Can it be used to track a time window like the one in the query below? If not, how else can I group all events done by the user within the login and logout times?

index=xxxx | transaction startswith="2022-04-11 10:52:08" endswith="2022-04-11 10:52:08 + 8 hrs" | stats .... by user

Hope I am clear.
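A minimal sketch of the eval approach, assuming login_time is a string in the format shown and that only the login event carries it (eventstats spreads it to the user's other events); the user field name is an assumption, and 8 hours is 28800 seconds:

index=xxxx
| eval login_epoch=strptime(login_time, "%Y-%m-%d %H:%M:%S")
| eventstats min(login_epoch) as login_epoch by user
| eval logout_epoch=login_epoch + 28800
| where _time>=login_epoch AND _time<=logout_epoch
| stats count by user

Since the boundaries are plain epoch numbers after strptime, a where clause (or a stats by user) is usually simpler than transaction for bounding the 8-hour window.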
Hello, good afternoon. Looking for some best practices here. Over the years, we have been using the UF to ingest Windows data. This is a reliable solution for ensuring the Windows events are being ingested into Splunk indexes. We now have a new solution called CrowdStrike, which seems to ingest Windows events as well. Based on the experience of the Splunk Community, can anyone share their experiences (or best practices)? We would like to have a reliable Windows solution but refrain from having duplicate data. Regards, Max
Hello, I'm trying to find a way to fetch/get the HEC host and port of a Splunk instance using the JavaScript SDK in the frontend, but I could not find any source of information that allows me to do such a thing... Anyone able to help? Thanks
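For reference, a minimal SPL sketch of where this information lives: HEC token stanzas (and the global http stanza that carries the port) are exposed under the data/inputs/http REST endpoint, so the same path should be reachable from any REST client, SDK included. This assumes the account running it is allowed to use the rest command, and which fields come back (host, port, token, etc.) can vary by version:

| rest /services/data/inputs/http
| table splunk_server title host port index disabled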
Hi, does AppD support PowerBuilder?

Post edited by @Ryan.Paredez to split the post into a new conversation and improve the title for searchability.
I have 2 searches and I want to link the two together in one table. The first search:

index=very_big_index caseNumber=1234567799 | table _time Name caseNumber UID phone

This displays the following as expected, but the phone field is blank:

_time       Name         caseNumber    UID                      phone
11APR2022   John Smith   1234567799    111222333444555666777

The second search, with the UID, yields the phone number but nothing else:

index=very_big_index 111222333444555666777 | stats values(phone) as phone

results:

phone
123-555-1234

How can I efficiently link these 2 searches together using the common field name/value of UID/111222333444555666777?
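A minimal sketch of one way to do this in a single search, assuming the phone-bearing events also have UID as an extracted field (if they only contain the value in raw text, the key would need to be extracted first); eventstats copies the phone value onto every event that shares the same UID:

index=very_big_index (caseNumber=1234567799 OR 111222333444555666777)
| eventstats values(phone) as phone by UID
| search caseNumber=1234567799
| table _time Name caseNumber UID phone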
Hi, I have three panels in one row. Since Panels A and B have less information and are 'thin', they look odd next to the 'thick' Panel C in the same row. I would like to stack the A and B panels on top of each other and then put them next to C. How can I achieve that? Many thanks for the help!