All Topics

Hello All, I have a requirement on dropdowns. I have the following lookup file, which contains application, environment, and index details. I need to get the environment values related to each application when I choose an application from the first dropdown; similarly, the index dropdown must only show index values based on what I chose in the application and environment dropdowns. I can get the desired results while using the lookup file, but how can this be achieved using an eval condition in the Splunk dashboard rather than the lookup file? I already have the values of these fields in the Splunk results.

application  environment  index
app_a        DEV          aws-app_a_npd
app_a        PPR          aws-app_a_ppr
app_a        TEST         aws-app_a_test
app_a        SUP          aws-app_a_sup
app_a        PROD         aws-app_a_prod
app_b        NPD          aws-app_b_npd
app_b        SUP          aws-app_b_sup
app_b        PROD         aws-app_b_prod
app_c        NPD          aws-app_c_npd
app_c        SUP          aws-app_c_sup
app_c        PROD         aws-app_c_prod
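A minimal sketch of resolving the index with eval instead of a lookup, assuming the dashboard exposes the two selections as tokens named $app_token$ and $env_token$ (hypothetical names); the remaining application/environment pairs from the table above would be added as extra case() branches, and the same pattern keyed on $app_token$ alone can populate the environment dropdown:

| makeresults
| eval application="$app_token$", environment="$env_token$"
| eval index=case(application="app_a" AND environment="DEV",  "aws-app_a_npd",
                  application="app_a" AND environment="PROD", "aws-app_a_prod",
                  application="app_b" AND environment="PROD", "aws-app_b_prod",
                  true(), "unknown")
| table index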
Hi peeps, I receive the below error while running a query. Below is my query:

eventtype=sfdc-login-history
| iplocation allfields=true SourceIp
| eval cur_t=_time
| streamstats current=t window=2 first(lat) as prev_lat first(lon) as prev_lon first(cur_t) as prev_t by Username
| eval time_diff=cur_t - prev_t
| distance outputField=distance inputFieldlat1=lat inputFieldLat2=prev_lat inputfieldLon1=lon inputFieldLon2=prev_lon
| eval time_diff=-1*time_diff
| eval ratio = distance*3600/time_diff
| where ratio > 500
| geostats latfield=lat longfield=lon count by Application
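If the distance command from the add-on turns out to be the problem, a sketch of an alternative that computes the distance with plain eval (haversine formula, result in km; it assumes lat/lon and prev_lat/prev_lon are already populated by the streamstats step above):

| eval rlat1=lat*pi()/180, rlat2=prev_lat*pi()/180
| eval dlat=(prev_lat-lat)*pi()/180, dlon=(prev_lon-lon)*pi()/180
| eval a=sin(dlat/2)*sin(dlat/2) + cos(rlat1)*cos(rlat2)*sin(dlon/2)*sin(dlon/2)
| eval distance=6371*2*atan2(sqrt(a), sqrt(1-a))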
Hello Everyone, I am just bringing up Splunk within our environment, so a lot of functions are still new to me. I am trying to use my Windows event data to update user IDs on Panorama; however, running the below query in my ES environment returns the error: External search command 'panuserupdate' returned error code 2. Script output = "ERROR Unable to get apikey from firewall: local variable 'username' referenced before assignment".

The query:

index=wineventlog host=xxxxxx
| mvexpand Security_ID
| mvexpand Source_Network_Address
| dedup Security_ID Source_Network_Address
| search Security_ID!="NULL SID"
| rename Security_ID as user
| rename Source_Network_Address as src_ip
| panuserupdate panorama=x.x.x.x serial=000000000000
| fields user src_ip

Brief overview of my data ingestion: Panorama syslog is ingested into Splunk Cloud through a heavy forwarder. The Palo Alto Add-on for Splunk is installed on both the HF and Splunk Cloud, but no data is showing in the app; every panel shows 0. Also, I do have a user account in Panorama with API permissions.
Hello Team,

Pre-staging environment (not production): a single server with 12 CPUs + 24 GB of memory + RAID0 NVMe (2.5 GB/s write, 5 GB/s read). All-in-one deployment (SH + indexer). CPU cores with HT on a dedicated server (6 cores with HT = 12 CPUs, not used by any other VM). Splunk 9.1.1 and ES 7.1.1, fresh install. NO data ingested (0 events in most of the indexes, including main, notable, risk, etc.), so basically no data yet to be processed. Default ES configuration; I have not yet tuned any correlation searches etc.

And already there are performance problems:
1. MC Scheduler Activity Instance showing 22% skipped.
2. ESX reporting minimal CPU usage (the same with memory).
3. MC showing more details: many different accelerated DM tasks are skipped, all the time.

Questions:
1. Obviously the first recommendation would be to disable many of the correlation searches/accelerated DMs, but that is not what I would like to do, because the aim is to test complete ES functionality (by generating a small number of different types of events). Why do I have these problems in the first place? I can see that all the tasks are very short and finish in 1 second; just a few take several seconds. That is expected, since I have 0 events everywhere and I always expect a small number of events on this test deployment. What should I do to tune it and make sure there are no problems with skipped jobs? Shall I increase max_searches_per_cpu / base_max_searches? Any other ideas? Overall, this seems weird.
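A quick way to see which scheduled searches are actually being skipped, and why, is to query Splunk's own scheduler logs; a minimal sketch (assumes access to the _internal index):

index=_internal sourcetype=scheduler status=skipped
| stats count by savedsearch_name reason
| sort - count

The reason field usually indicates whether the concurrency cap (base_max_searches / max_searches_per_cpu) was hit or whether the previous run of the same search was still in flight.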
Hello Team, I have a training Splunk instance with ES 7.1 that I use for training and experiments. Question: is there any "training/example" data I could import into it? The point is: I do not want to configure dozens of TAs/collectors or build real labs with real cybersecurity attacks; instead, I would love to have test data so I could learn, experiment, and review Splunk ES capabilities. Anything?
To rebalance data, Splunk only offers the option to rebalance a single index or all indexes; there is no option to provide a list of them. I've created the following shell script to do this and I hope it will help the community. To use it:

1. Create an empty directory and copy the script there
2. Change the parameters at the beginning of the script to match your platform
3. Create a file named ~/.creds containing MY_CREDS="admin:password" (you can replace admin and password with any user account with admin privileges)
4. Create a file in the same directory as the script named indexes.conf with the list of indexes you want to rebalance (one per line)
5. Launch the script and provide a value for the timeout in minutes ("-m" switch; e.g. "./rebalance.sh -m 2880")

Tip: if you access your manager node using ssh, consider using "screen" to keep your session open for a long period of time.

#!/bin/sh
HOST=my_manager_node.example.com
PORT=8089

source $HOME/.creds
CURL_OPTS="-su ${MY_CREDS} --write-out HTTPSTATUS:%{http_code}"
INDEXES_FILE="indexes.conf"

# rebalance_threshold can be set by the configuration endpoint or by the normal endpoint with action=start
CONFIG_CMD="/usr/bin/curl ${CURL_OPTS} https://${HOST}:${PORT}/services/cluster/config/config -d rebalance_threshold=0.9"
START_CMD="/usr/bin/curl ${CURL_OPTS} https://${HOST}:${PORT}/services/cluster/manager/control/control/rebalance_buckets -d action=start"
MAXRUNTIME_OPT="-d max_time_in_min="
INDEX_OPT="-d index="
STOP_CMD="/usr/bin/curl ${CURL_OPTS} https://${HOST}:${PORT}/services/cluster/manager/control/control/rebalance_buckets -d action=stop"
STATUS_CMD="/usr/bin/curl ${CURL_OPTS} https://${HOST}:${PORT}/services/cluster/manager/control/control/rebalance_buckets -d action=status"
LOG="rebalance.log"
MAXRUNTIME="1"

while getopts ":m:" opt; do
  case $opt in
    m)
      MAXRUNTIME=${OPTARG}
      ;;
    :)
      echo "Missing value for argument -$OPTARG"
      exit 1
      ;;
    \?)
      echo "Unknown option -$OPTARG"
      exit 1
      ;;
  esac
done

if [[ "$OPTIND" -ne "3" ]]; then
  echo "Please specify the timeout in minutes with -m"
  exit
fi

echo -n "Starting $0: " >>$LOG
echo $(date) >>$LOG

[[ -f $INDEXES_FILE ]] || exit

echo "Configuring rebalancing"
echo "Configuring rebalancing" >>$LOG
EPOCH=$(date +"%s")
MAX_EPOCH=$(( EPOCH + ( 60 * MAXRUNTIME ) ))
echo "Will end at EPOCH: $MAX_EPOCH" >>$LOG

HTTP_RESPONSE=$($CONFIG_CMD)
HTTP_BODY=$(echo $HTTP_RESPONSE | sed -e 's/HTTPSTATUS\:.*//g')
HTTP_STATUS=$(echo $HTTP_RESPONSE | tr -d '\n' | sed -e 's/.*HTTPSTATUS://')
if [[ ! $HTTP_STATUS == "200" ]]; then
  echo "HTTP status: $HTTP_STATUS"
  echo "HTTP body: $HTTP_BODY"
  exit
fi

RUNNING=0
for INDEX in `cat $INDEXES_FILE`; do
  EPOCH=$(date +"%s")
  MINS_REMAINING=$(( ( MAX_EPOCH - EPOCH ) / 60 ))
  if [[ "$MINS_REMAINING" -le "0" ]]; then
    echo "Timeout reached"
    echo "Timeout reached" >>$LOG
    exit
  fi
  echo "Rebalancing $INDEX"
  echo "Rebalancing $INDEX" >>$LOG
  echo "Remaining time: $MINS_REMAINING" >>$LOG
  echo $(date) >>$LOG
  #HTTP_RESPONSE=$(${START_CMD} ${INDEX_OPT}${INDEX})
  HTTP_RESPONSE=$(${START_CMD} ${INDEX_OPT}${INDEX} ${MAXRUNTIME_OPT}${MINS_REMAINING})
  #HTTP_RESPONSE=200
  HTTP_BODY=$(echo $HTTP_RESPONSE | sed -e 's/HTTPSTATUS\:.*//g')
  HTTP_STATUS=$(echo $HTTP_RESPONSE | tr -d '\n' | sed -e 's/.*HTTPSTATUS://')
  echo "HTTP status: $HTTP_STATUS" >>$LOG
  echo "HTTP body: $HTTP_BODY" >>$LOG
  if [[ ! $HTTP_STATUS == "200" ]]; then
    echo "HTTP status: $HTTP_STATUS"
    echo "HTTP body: $HTTP_BODY"
    exit
  fi
  RUNNING=1
  WAS_RUNNING=0
  sleep 1
  while [[ $RUNNING == 1 ]]; do
    HTTP_RESPONSE=$($STATUS_CMD)
    HTTP_BODY=$(echo $HTTP_RESPONSE | sed -e 's/HTTPSTATUS\:.*//g')
    HTTP_STATUS=$(echo $HTTP_RESPONSE | tr -d '\n' | sed -e 's/.*HTTPSTATUS://')
    if [[ ! $HTTP_STATUS == "200" ]]; then
      echo "HTTP status: $HTTP_STATUS"
      echo "HTTP body: $HTTP_BODY"
      exit
    fi
    echo "$HTTP_BODY" | grep "Data rebalance is not running" >/dev/null
    if [ $? -eq 0 ]; then
      RUNNING=0
    else
      WAS_RUNNING=1
      RUNNING=1
      echo -n "."
      sleep 1
    fi
  done
  if [[ $WAS_RUNNING == 1 ]]; then
    # We need a CR after the last echo -n "."
    echo
  fi
done
Hi Guys, I need the path/query/log file/index for each of the items below, relating only to the Splunk application itself.

1. Authentication
   Conditional control not met
   Disabled/Locked account
   Expired token/certificate
   Failed logins
   Invalid Credentials
   MFA check failed
   Successful logins
   When someone is elevating their permissions and accessing mail of another user

2. Authorization
   Failed authorization
   Resource does not exist
   Successful authorization
   User not authorized

3. Change
   Add Integrated Application / Service Configuration
   Change to authentication scheme
   Change to IP allow list
   Change to MFA requirements
   Changes to Authentication or Authorization methods
   Remove Integrated Application / Service Configuration

4. User Management
   Add Group to Group
   Add User or Group
   Create certificate
   Create Role
   Create token
   Create User
   Delete Role
   Delete User
   Disable token
   Elevate User to privileged status
   Remove Group from Group
   Remove user from privileged status
   Remove User or Group
   Revoke certificate
   Revoke Role
   Revoke User
   Update Role
   Update User

5. Access
   User accessing a sensitive data object
   User accessing multiple sensitive data objects
   User action performed with sensitive data

6. Jobs and activity
   Activity / Performance Indicators
   Debug logs
   System Errors/Warnings
   System Power Down
   System Power Up
   System Service Start
   System Service Stop
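For the authentication items, Splunk records its own login activity in the _audit index; a minimal sketch (assumes access to _audit, where the info field holds succeeded or failed):

index=_audit action="login attempt"
| stats count by user info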
Hello, I'm trying to use the data from one search in another search. This is what I'm trying to do:

index=index_example sourcetype=sourcetype_example
| chart count by Status
| eval output=(count/ [index=index_example sourcetype=sourcetype_example | stats count] *100)

In the eval I just want [index=index_example sourcetype=sourcetype_example | stats count] to be a single whole number.
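One way this is often done without a subsearch is to let eventstats compute the grand total from the same result set; a minimal sketch:

index=index_example sourcetype=sourcetype_example
| chart count by Status
| eventstats sum(count) as total
| eval output=round(count/total*100, 2)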
I have created a graph on an hourly basis, so it displays counts on the bars by hour. Now my requirement is that I want to display one message on the graph: Totalcount=X.
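A minimal sketch of one way to carry the total alongside the hourly counts (the base search is a placeholder); the Totalcount field could then be surfaced in the panel title or a single-value overlay, depending on the dashboard:

index=your_index
| timechart span=1h count
| eventstats sum(count) as Totalcount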
Hi, I am trying to determine how to see which alerts are using specific indexes in Splunk. Is there a way to search for that? So if I wanted to see all alerts that are using index=firewall, for example, how would I get that?
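A minimal sketch using the REST endpoint for saved searches (run from the search bar; requires permission to list saved searches, and simple string matching like this can miss searches that reference the index via a macro). Adding a filter such as is_scheduled=1 narrows it to scheduled searches/alerts:

| rest /servicesNS/-/-/saved/searches splunk_server=local
| search search="*index=firewall*"
| table title eai:acl.app eai:acl.owner is_scheduled search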
Hello, I am trying to get, for each hour, the top URL by hits. I am using the below search but I am not getting results for each hour.

index=*
| fields Request_URL _time
| stats count as hits by Request_URL _time
| bucket span=1h _time
| sort by hits desc
| head 1

Thanks in advance!
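A minimal sketch of the pattern that usually works here: bucket _time first, count per URL per hour, then keep the top row for each hour (dedup keeps the first, highest-hits, row per hourly bucket):

index=*
| bin _time span=1h
| stats count as hits by _time Request_URL
| sort 0 _time -hits
| dedup _time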
I am looking for a query that can help me list or audit systems that are using default passwords or any other method you think I can use to audit my environment for default passwords.
Let's say I'm running a search where I want to look at domains traveled to:

index=web_traffic sourcetype=domains domain IN ("*.com", "*.org*", "*.edu*")

I want to count the domains that have appeared less than 5 times over the search period. How can I accomplish this? I know I could do a stats count by domain, but after that I'm unable to grab the rest of the results in the index, such as time, etc.
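A minimal sketch that keeps the raw events while filtering on a per-domain count, so fields like _time remain available:

index=web_traffic sourcetype=domains domain IN ("*.com", "*.org*", "*.edu*")
| eventstats count as domain_count by domain
| where domain_count < 5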
I am looking to create a chart showing the average daily total ingest by month, in terabytes, excluding weekends, over the past year. For some reason I am struggling with this. Any help getting me started would be appreciated.
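A minimal sketch against the license usage logs (assumes access to the _internal index; b is the bytes field in license_usage.log, and weekends are dropped by day of week):

index=_internal source=*license_usage.log* type=Usage earliest=-1y
| bin _time span=1d
| stats sum(b) as daily_bytes by _time
| eval dow=strftime(_time, "%a")
| where dow!="Sat" AND dow!="Sun"
| eval daily_tb=daily_bytes/1024/1024/1024/1024
| timechart span=1mon avg(daily_tb) as avg_daily_ingest_tb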
Afternoon, we are currently having issues with duplicate JSON field values on our search heads, which operate in a clustered setup. I understand this is due to the data being parsed at index time and again at search time, hence duplicating the fields. I have read many other forum posts with similar issues. The suggested fix is to set the below in props.conf on the search heads, which we have deployed via an app:

KV_MODE = none
AUTO_KV_JSON = false

while keeping just the below in props.conf on the forwarder:

INDEXED_EXTRACTIONS = JSON

We have successfully tested this in a non-clustered environment and it seems to work, but in the clustered setup we are still seeing the duplicate values. Any help or guidance would be greatly appreciated.
I have two fields: DNS and DNS_Matched. The latter is a multi-value field. How can I see if the value of DNS is one of the values of the multi-value field DNS_Matched? Example:

DNS      DNS_Matched
host1    host1 host1-a host1-r
host2    host2 host2-a host2-r
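A minimal sketch using mvfind, which returns the index of the first multivalue entry matching a regex and null when nothing matches (note that regex metacharacters such as dots in the DNS value would match loosely here):

| eval is_matched=if(isnotnull(mvfind(DNS_Matched, "^".DNS."$")), "match", "no match")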
I am trying to set up props & transforms to send DEBUG events to the null queue. I tried the below regex but it doesn't seem to work.

transforms.conf:
[setnull]
REGEX = .+(DEBUG...).+$
DEST_KEY = queue
FORMAT = nullQueue

props.conf:
[sourcetype::risktrac_log]
TRANSFORMS-null=setnull

I used REGEX=\[\d{2}\/\d{2}\/\d{2}\s\d{2}:\d{2}:\d{2}:\d{3}\sEDT]\s+DEBUG\s.* as well, but that too doesn't drop DEBUG messages. I just tried DEBUG alone in the regex too, no help. Can someone help me here please?

Sample event:
[10/13/23 03:46:48:551 EDT] DEBUG DocumentCleanup.run 117 : /_documents document cleanup complete.

How does REGEX pick the pattern? I can see that both of the regexes are able to match the whole event. We can't turn DEBUG off for the application.
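A sketch of the stanza pair commonly used for this. Two things worth checking: props.conf sourcetype stanzas are written without a sourcetype:: prefix, and these settings have to live on the first full Splunk instance that parses the data (a heavy forwarder or the indexers, not a universal forwarder):

# transforms.conf
[setnull]
REGEX = \sDEBUG\s
DEST_KEY = queue
FORMAT = nullQueue

# props.conf
[risktrac_log]
TRANSFORMS-null = setnull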
We are using Splunk Cloud 9.0.2303.201 and have version 9.0.4 of the Splunk Universal Forwarder installed on a RHEL 7.9 server. The UF is configured to monitor a log file that outputs JSON in this format:   {"text": "Ending run - duration 0:00:00.249782\n", "record": {"elapsed": {"repr": "0:00:00.264696", "seconds": 0.264696}, "exception": null, "extra": {"run_id": "b20xlqbi", "action": "status"}, "file": {"name": "alb-handler.py", "path": "scripts/alb-handler.py"}, "function": "exit_handler", "level": {"icon": "", "name": "INFO", "no": 20}, "line": 79, "message": "Ending run - duration 0:00:00.249782", "module": "alb-handler", "name": "__main__", "process": {"id": 28342, "name": "MainProcess"}, "thread": {"id": 140068303431488, "name": "MainThread"}, "time": {"repr": "2023-10-13 10:09:54.452713-04:00", "timestamp": 1697206194.452713}}}   Long story short, it seems that Splunk is getting confused by the multiple fields in the JSON that look like timestamps. The timestamp that should be used is the very last field in the JSON. I first set up a custom sourcetype that's a clone of the _json sourcetype by manually inputting some of these records via Settings -> Add Data.  Using that tool I was able to get Splunk to recognize the correct timestamp via the following settings:   TIMESTAMP_FIELDS = record.time.timestamp TIME_FORMAT = %s.%6N     When I load the above record by hand via Settings -> Add Data and use my custom sourcetype with the above fields then Splunk shows the _time field is being set properly,  so in this case it's 10/13/23 10:09:54.452 AM. The exact same record, when loaded through the Universal Forwarder, appears to be ignoring the TIMESTAMP_FIELDS parameter. It ends up with a date/time of 10/13/23 12:00:00.249 AM, which indicates that it's trying to extract the date/time from the "text" field at the very beginning of the JSON (the string "duration 0:00:00.249782"). The inputs.conf on the Universal Forwarder is quite simple:   [monitor:///app/appman/logs/combined_log.json] sourcetype = python-loguru index = test disabled = 0     Why is the date/time parsing working properly when I manually load these logs via the UI but not when being imported via the Universal Forwarder?
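One detail that may matter here: with structured inputs (INDEXED_EXTRACTIONS), the universal forwarder parses the data itself, so settings such as TIMESTAMP_FIELDS generally need to be present in props.conf on the UF rather than only on the Splunk Cloud side. A sketch of what that stanza might look like on the forwarder, reusing the values from the post (INDEXED_EXTRACTIONS = json is an assumption about how the custom sourcetype was defined):

# props.conf on the universal forwarder (assumed location)
[python-loguru]
INDEXED_EXTRACTIONS = json
TIMESTAMP_FIELDS = record.time.timestamp
TIME_FORMAT = %s.%6N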
I am attempting to set up an INGEST_EVAL for the _time field. My goal is to check if the _time field is in the future and prevent any future timestamps from being indexed. The INGEST_EVAL is configured correctly in props.conf, fields.conf and transforms.conf, but it fails when I attempt to use a conditional statement. My goal is to do something like this in my transforms.conf:

[ingest_time_timestamp]
INGEST_EVAL = ingest_time_stamp:=if(_time > time(), time(), _time)

If _time is in the future, then I want it set to the current time; otherwise I want to leave it alone. Anyone have any ideas?
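Once the transform is in place, a quick sanity check at search time is to compare event time with index time; a minimal sketch (the index name is a placeholder) that surfaces events whose timestamp is later than the moment they were indexed:

index=your_index
| eval lag_seconds=_time - _indextime
| where lag_seconds > 0
| table _time _indextime lag_seconds host sourcetype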
I want to extract the Sample ID field value, e.g. "Sample ID":"020ab888-a7ce-4e25-z8h8-a658bf21ech9".
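A minimal sketch with rex, assuming the events contain the key/value pair in the JSON-style form shown above:

| rex field=_raw "\"Sample ID\":\"(?<sample_id>[^\"]+)\""
| table sample_id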