All Topics

Hello. Is it possible to append two searches? I have a search that ends in | table A B C, and I want to append to it some values under A, B, and C that I calculate. Can you tell me the syntax for that? Thanks!

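A minimal sketch of one way to do this, using append with a makeresults subsearch to generate the calculated row (the literal values below are placeholders for whatever you calculate):

... | table A B C
| append
    [| makeresults
     | eval A="calculated_A", B="calculated_B", C="calculated_C"
     | fields A B C ]
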
Hi, all.

How can I index compressed files in .bz2 format using a Universal Forwarder installed on a Windows server?

On the UF:

inputs.conf
[monitor://E:\LogServer\Logs\*.bz2]
sourcetype = XmlWinEventLog
disabled = 0
index = main

props.conf
[source::...E:\\LogServer\\Logs\\*.bz2]
sourcetype = XmlWinEventLog

[XmlWinEventLog]
invalid_cause = archive
unarchive_cmd = _auto

According to the most recent docs, Splunk does index compressed files: https://docs.splunk.com/Documentation/Splunk/8.2.1/Admin/Propsconf

But even after following these instructions the logs are still not indexed, and I could not find any error in splunkd.log that points to a problem. Does anyone have any suggestions?

Thanks in advance.

James \°/

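For comparison, a minimal sketch of how I would expect the configuration to look if the built-in archive handling is relied on. The [source::] override below is written against the literal Windows path rather than prefixing it with "...", and the [XmlWinEventLog] archive settings are dropped on the assumption that Splunk's shipped default props.conf already handles *.bz2 archives; treat both points as assumptions, not verified behavior:

inputs.conf (on the UF)
[monitor://E:\LogServer\Logs\*.bz2]
disabled = 0
index = main
sourcetype = XmlWinEventLog

props.conf (only if the sourcetype must be forced per source)
[source::E:\\LogServer\\Logs\\*.bz2]
sourcetype = XmlWinEventLog
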
I've got my universal forwarders and heavy forwarders doing indexer discovery through the cluster master like so ...

outputs.conf
[indexer_discovery:clustermaster]
pass4SymmKey = {password}
master_uri = https://{my cluster master}.domain.foo:8089

[tcpout:clustermastergroup]
indexerDiscovery = clustermaster
useACK = true

[tcpout]
defaultGroup = clustermastergroup

Is there any reason I could not do the same in the cluster master's own outputs.conf file? Basically, it would ask itself over 8089 who the peer nodes are.

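For what it's worth, a sketch of what that might look like on the cluster master itself. The [indexAndForward] and forwardedindex settings are assumptions borrowed from the documented pattern for forwarding a manager node's own data to the indexing tier; I have not verified them in combination with indexer discovery:

outputs.conf (on the cluster master, sketch only)
[indexAndForward]
index = false

[tcpout]
defaultGroup = clustermastergroup
forwardedindex.filter.disable = true
indexAndForward = false

[indexer_discovery:clustermaster]
pass4SymmKey = {password}
master_uri = https://{my cluster master}.domain.foo:8089

[tcpout:clustermastergroup]
indexerDiscovery = clustermaster
useACK = true
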
I recently upgraded from 8.1 to 8.2.3 and noticed the message about migrating the KV store to WiredTiger. I decided to migrate and followed the instructions here: https://docs.splunk.com/Documentation/Splunk/8.2.3/Admin/MigrateKVstore#Migrate_the_KV_store_after_an_upgrade_to_Splunk_Enterprise_8.1_or_higher_in_a_single-instance_deployment

It failed because, I think, mongodump failed. The official reason in splunkd.log:

11-05-2021 14:10:57.695 -0700 ERROR MongodRunner [25826 MainThread] - MongtoolRunner exited with nonzero status=4
11-05-2021 14:10:57.695 -0700 ERROR KVStoreConfigurationProvider [25826 MainThread] - Failed to run mongodump, shutting down mongod

mongod.log output:

mongodump fatal error: unrecognized DWARF version in .debug_info at 6
mongodump runtime stack:
mongodump panic during panic
mongodump runtime stack:
mongodump stack trace unavailable

I removed the migration line in server.conf, started Splunk, and tried to back up the KV store (both statuses were "ready"), but it failed to create anything in kvstorebackup; here is the relevant splunkd.log output:

11-05-2021 14:54:31.221 -0700 INFO KVStoreBackupRestore [27091 KVStoreBackupThread] - backup started for archiveName="kvdump_1636149271", using method=2
11-05-2021 14:54:31.284 -0700 ERROR MongodRunner [41130 BackupRestoreWorkerThread] - MongtoolRunner exited with nonzero status=4
11-05-2021 14:54:31.284 -0700 WARN KVStoreBulletinBoardManager [41130 BackupRestoreWorkerThread] - Failed to backup KV Store. Check for errors in the splunkd.log file in the $SPLUNK_HOME/var/log/splunk directory.

with the same mongodump errors as before, which makes me think they are related. I checked my certificate (still good until 2024), permissions, and ownerships, and all seem correct.

Any ideas?

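For reference, a minimal sketch of the CLI commands for checking KV store status and retrying the dump, in case someone spots something off (the archive name is just an example):

$SPLUNK_HOME/bin/splunk show kvstore-status
$SPLUNK_HOME/bin/splunk backup kvstore -archiveName kvdump_test
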
I have a tstats search that isn't returning a count consistently. In the where clause, I have a subsearch for determining the time modifiers. Here's the search:

| tstats count from datamodel=Vulnerabilities.Vulnerabilities where index=qualys_i
    [| search earliest=-4d@d index=_internal host="its-splunk7-hf.ucsd.edu" sourcetype="ta_QualysCloudPlatform*" host_detection ("Done loading detections" OR "Running now")
     | stats `stime(_time)` `slist(_raw)` count by PID
     | eval duration = last_seen - first_seen, earliest = strftime(first_seen - 300, "%m/%d/%Y:%H:%M:%S"), latest = strftime(last_seen + 300, "%m/%d/%Y:%H:%M:%S")
     | where count > 1 AND duration < 82800
     | sort -last_seen
     | head 1
     | return earliest latest ]

If I run the subsearch on its own ...

earliest=-4d@d index=_internal host="its-splunk7-hf.ucsd.edu" sourcetype="ta_QualysCloudPlatform*" host_detection ("Done loading detections" OR "Running now")
| stats `stime(_time)` `slist(_raw)` count by PID
| eval duration = last_seen - first_seen, earliest = strftime(first_seen - 300, "%m/%d/%Y:%H:%M:%S"), latest = strftime(last_seen + 300, "%m/%d/%Y:%H:%M:%S")
``` Exclude results that ran over 23 hours or didn't finish ```
| where count > 1 AND duration < 82800
| sort -last_seen
| head 1
| return earliest latest

... I get the time modifiers accurately (e.g., earliest="11/05/2021:06:25:51" latest="11/05/2021:11:31:12"). When I inspect the job (of the first search), it derives the same time modifiers (in phase0, phase1, and remoteSearch). The issue: when I run the first search, my count is double. In other words, it double-counts each record. If I explicitly put the time modifiers in place of the subsearch, the count is accurate (not doubled). Has anyone run into this?

I'm trying to post REST data via HTTP to Splunk. This works when using a pre-generated token to an HEC:

POST /services/collector/event HTTP/1.0\r\nHost: galaxy.xypro.com\r\nContent-Type: application/json\r\nKeep-Alive: 100\r\nConnection: keep-alive\r\nAuthorization: Splunk 1d07454b-d9ef-41b0-9450-59d8670a78c7\r\nContent-Length: 166\r\n\r\n{\"time\": 1636117458, \"host\": \"galaxy.xypro.com\", \"source\": \"test\", \"event\": { \"message\": \"2021-11-05:13:04:18.491986: Logging test message #0\", \"severity\": \"INFO\" } }

HTTP/1.1 200 OK\r\nDate: Fri, 05 Nov 2021 20:05:17 GMT\r\nContent-Type: application/json; charset=UTF-8\r\nX-Content-Type-Options: nosniff\r\nContent-Length: 27\r\nVary: Authorization\r\nConnection: Keep-Alive\r\nX-Frame-Options: SAMEORIGIN\r\nServer: Splunkd\r\n\r\n{\"text\":\"Success\",\"code\":0}

However, when I try to generate a session token to allow basic authorization, I see the following response, even though the user and password are correct:

POST HTTPS://localhost:8089/services/auth/login HTTP/1.0\r\nHost: galaxy.xypro.com\r\nContent-Type: application/json\r\nKeep-Alive: 100\r\nConnection: keep-alive\r\nAuthorization: Basic a3B3YXRlcnNvbjpUZXN0MTIzNDU=\r\nContent-Length: 48\r\n\r\n{\"username\":\"kpwaterson\",\"password\":\"Test12345\"}

HTTP/1.1 400 Bad Request\r\nDate: Fri, 05 Nov 2021 19:57:40 GMT\r\nExpires: Thu, 26 Oct 1978 00:00:00 GMT\r\nCache-Control: no-store, no-cache, must-revalidate, max-age=0\r\nContent-Type: text/xml; charset=UTF-8\r\nX-Content-Type-Options: nosniff\r\nContent-Length: 129\r\nConnection: Keep-Alive\r\nX-Frame-Options: SAMEORIGIN\r\nServer: Splunkd\r\n\r\n<?xml version=\"1.0\" encoding=\"UTF-8\"?>\n<response>\n <messages>\n <msg type=\"WARN\">Login failed</msg>\n </messages>\n</response>\n

I was also investigating using receivers/simple for HTTP messages. Although the message is posted to Splunk, a response is never received:

POST /services/receivers/simple?source=NonStop&index=main&sourcetype=json_no_timestamp HTTP/1.0\r\nHost: galaxy.xypro.com\r\nContent-Type: application/json\r\nKeep-Alive: 100\r\nConnection: keep-alive\r\nAuthorization: Bearer eyJraWQiOiJzcGx1bmsuc2VjcmV0IiwiYWxnIjoiSFM1MTIiLCJ2ZXIiOiJ2MiIsInR0eXAiOiJzdGF0aWMifQ.eyJpc3MiOiJrZW4ud2F0ZXJzb24gZnJvbSBWTS1ERVYtU1BMVU5LIiwic3ViIjoia2VuLndhdGVyc29uIiwiYXVkIjoiRGV2ZWxvcG1lbnQiLCJpZHAiOiJMREFQOi8vbWZhIiwianRpIjoiODUzMDYyZmFhZjA0NWY0Y2JlMWEyNGMxZWE3NTAyYjRmMjEwMGEyNzE0NzA1N2Q0MmUxOGVkYWRlMTYyZTlkZiIsImlhdCI6MTYzMzUyNTE5MywiZXhwIjoxNjM2MTE3MTkzLCJuYnIiOjE2MzM1MjUxOTN9.3TKSCeK52awMJDxNzfvfW4PNewsGVlKkFXSf0Vy1Dv7JH4DNH9Ogn_w5WZLkZkeNXmjJqU8opORXW7DjxA2eag\r\nContent-Length: 166\r\n\r\n{\"time\": 1636117714, \"host\":\"galaxy.xypro.com\", \"source\": \"test\", \"event\": { \"message\": \"2021-11-05:13:08:34.900042: Logging test message #0\", \"severity\": \"INFO\" } }

Could you please let me know what may be the issue with generating the session key, and why a response is not received from receivers/simple? Thanks.

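A minimal sketch of the login exchange as I understand the endpoint expects it: form-encoded username/password in the body rather than a JSON document (hostname and credentials reused from above; untested):

POST /services/auth/login HTTP/1.0\r\nHost: galaxy.xypro.com\r\nContent-Type: application/x-www-form-urlencoded\r\nContent-Length: 38\r\n\r\nusername=kpwaterson&password=Test12345

The response should contain a <sessionKey> element; that value then goes in an "Authorization: Splunk <sessionKey>" header (rather than Basic or Bearer) on subsequent calls such as receivers/simple.
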
I am installing the Splunk universal forwarder on an AWS Elastic Beanstalk environment to forward logs to our new Splunk Cloud application. Everything sets up correctly and I am able to find data when searching the _internal index with the hostname of the instance. The problem is, no data from the file I'm monitoring is actually being forwarded, though I can tail the file and see it being updated when new logs from my web application are added. I know the monitor succeeds, because in the AWS logs after a deployment I can see:

2021-11-05 20:06:09,416 P3428 [INFO] Added monitor of '/tmp/logs/node.log'.

and I add it with:

/opt/splunkforwarder/bin/splunk add monitor "/tmp/logs/node.log" -hostname "$splunk_logs_hostname" -sourcetype json -index node

So if I understand this correctly, it should show up in my Splunk application under the "node" index. But when I search for it nothing comes up, and if I go to Settings > Indexes where I created the index, there are no events and no current size. Does anyone have any ideas on how to troubleshoot this issue?

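A few checks that might narrow it down; this is only a sketch, with <instance-hostname> standing in for whatever $splunk_logs_hostname resolves to:

On the forwarder:
/opt/splunkforwarder/bin/splunk list monitor
/opt/splunkforwarder/bin/splunk list forward-server

In Splunk Cloud, over a wide time range:
index=_internal host=<instance-hostname> source=*splunkd.log* (TailReader OR WatchedFile OR "node.log")
index=node host=<instance-hostname>
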
My Python is 3.8.5 and splunk-sdk is 1.6.16. My Splunk developer gives me a URL and I use its search string to retrieve data as shown below. Here are my search string and additional Python code; "search", "earliest", and "latest" were added after copy/pasting the search string.

SEARCH_STRING = f"""
    search sourcetype="builder:payeeservice" host="JWPP*BLDR*P*" "*PayeeAddResponse" "*" "*" "*" "*" "*" "*" "*"
    earliest=-1h@h latest=-0h@h
    |rex d5p1:Description>(?<Description>.*</d5p1:Description>)
    |eval Description = replace(Description,"<[/]*[d]5p1:[\S]*>|<[d]5p1:[\S\s\"\=]*/>", "")
    |rex "GU\(((?P<SponsorId>[^;]+);(?P<SubscriberId>[^;]+);(?P<SessionId>[^;]*);(?P<CorrelationId>[^;]+);(?P<Version>\w+))\)"
    |table _time,SponsorId, SubscriberId,SessionId, CorrelationId,Description
    |join type=left CorrelationId [search sourcetype="builder:payeeservice" host="JWPP*BLDR*P*"  "*AdditionalInformation*" |xmlkv ]
    |eval Timestamp = if((TenantId != ""),Timestamp,_time),PayeeName = if((TenantId != ""),PayeeName,""), Message = if((Description != ""),Description,Message), Exception = if((TenantId != ""),Exception,""), Address = if((TenantId != ""),Address,""), PayeeType = if((TenantId != ""),PayeeType,""),MerchantId = if((TenantId != ""),MerchantId,""),AccountNumber = if((TenantId != ""),AccountNumber,""),SubscriberId = if((TenantId != ""),UserId,SubscriberId),SponsorId = if((TenantId != ""),TenantId,SponsorId)
    |table Timestamp, SponsorId,SubscriberId, PayeeName,Message,Exception,CorrelationId,SessionId,PayeeName,Address,PayeeType,MerchantId,AccountNumber
"""

import splunklib.results as results

service = connect_Splunk()
rr = results.ResultsReader(service.jobs.create(SEARCH_STRING))
ord_list = []
for result in rr:
    if isinstance(result, results.Message):
        # skip messages
        pass
    elif isinstance(result, dict):
        # Normal events are returned as dicts
        ord_list.append(result)

I get this error, so something is wrong in my search string. How can I fix it?

splunklib.binding.HTTPError: HTTP 400 Bad Request -- Error in 'SearchParser': Mismatched ']'.

Thanks.

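A couple of things I would try first, as a sketch rather than a verified fix: wrap the first rex pattern in double quotes (an unquoted, bracket-heavy pattern can confuse the search parser), and pass the time bounds as job arguments instead of editing the query string. The keyword arguments below are standard splunklib ones; everything else is unchanged from the code above:

    |rex "d5p1:Description>(?<Description>.*</d5p1:Description>)"

job = service.jobs.create(SEARCH_STRING, earliest_time="-1h@h", latest_time="now", exec_mode="blocking")
rr = results.ResultsReader(job.results(count=0))
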
I am trying to use the following to get a "Splunk Workload Pricing Estimate", but as stated above it errors out. Do you have a better script / SPL to share, please? Thanks a million.

index=_introspection earliest=-30d component=Hostwide
  [| inputlookup dmc_assets
   | table serverName as host, search_group
   | search search_group=*dmc_group_index* OR search_group=*dmc_group_search_head*
   | table host ]
| eval cpu_util = ('data.cpu_user_pct' + 'data.cpu_system_pct')
| bin _time span=5m
| table _time host data.cpu_count data.virtual_cpu_count data.cpu_idle_pct data.cpu_idle_pct cpu_util
```5-min Roll-Up```
| stats max(data.cpu_count) AS physical_cores, max(data.virtual_cpu_count) AS numberOfVirtualCores, max(cpu_util) as CPU_util_pct_max by _time host
| eval max_5minCPUsUsed = CPU_util_pct_max*numberOfVirtualCores/100
| stats values(host) as host_list dc(host) as total_hosts sum(physical_cores) as physical_cores sum(numberOfVirtualCores) as numberOfVirtualCores sum(max_5minCPUsUsed) as max_5minCPUsUsed by _time
```24h Roll-Up```
| bin _time span=1d
| stats values(host_list) as host_list max(total_hosts) as total_hosts max(physical_cores) as physical_cores max(numberOfVirtualCores) as numberOfVirtualCores p90(max_5minCPUsUsed) as p90Daily_5minMax_CPUsUsed by _time
```Month Roll-Up```
| appendpipe [
    | stats max(total_hosts) as total_hosts, max(physical_cores) as physical_cores, max(numberOfVirtualCores) as numberOfVirtualCores, p90(p90Daily_5minMax_CPUsUsed) as p90Daily_5minMax_CPUsUsed
    | eval _time="90th Perc. across report duration (equivalent to 3 days out of 30)"]
| eval p90Daily_5minMax_CPUsUsed=round(p90Daily_5minMax_CPUsUsed,2)

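One thing that stands out to me: the table command does not rename fields, so host never gets populated inside the subsearch and the host filter likely doesn't do what's intended. A sketch of just that subsearch with an explicit rename (everything else left as-is; whether this is the only problem is an assumption):

  [| inputlookup dmc_assets
   | search search_group=*dmc_group_index* OR search_group=*dmc_group_search_head*
   | rename serverName AS host
   | fields host ]
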
Hello, I need some help here. The goal is to pass one IP_Address found in the inner search to the outer search. The IP is correctly extracted, but I'm getting the following error from the "where" command and am clueless at this point. Here's the error:

Error in 'where' command: The operator at '10.132.195.72' is invalid.

And here's the search:

index=ipam sourcetype=data earliest=-48h latest=now()
| where cidrmatch(name, IP_Address)
    [ search index=networksessions sourcetype=microsoft:dhcp (Description=Renew OR Description=Assign OR Description=Conflict) earliest=-15min latest=now()
      | head 1
      | return ($IP_Address) ]

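A sketch of the pattern I would try: put the subsearch directly in cidrmatch's second argument, and have it wrap the value in literal double quotes so the substituted text is a valid string in the eval expression (same field names as above; untested against your data):

index=ipam sourcetype=data earliest=-48h latest=now()
| where cidrmatch(name,
    [ search index=networksessions sourcetype=microsoft:dhcp (Description=Renew OR Description=Assign OR Description=Conflict) earliest=-15min latest=now()
      | head 1
      | eval IP_Address="\"" . IP_Address . "\""
      | return $IP_Address ])
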
Hello, I am new to Splunk and am having an issue with the following command:

SendersMNO="*" NOT ("VZ", "0", "Undefined")
| where SenderType="Standard"
| stats count as Complaints by SendersAddress
| sort 10 -Complaints
| table SendersAddress, SendersMNO, Complaints

The command works; however, the result column for SendersMNO is not producing any values. Any reason why? All help is appreciated.

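A likely explanation: stats only keeps the fields it aggregates or groups by, so SendersMNO no longer exists by the time table runs. A minimal sketch that carries it through (this assumes each SendersAddress maps to a single MNO value):

SendersMNO="*" NOT ("VZ", "0", "Undefined")
| where SenderType="Standard"
| stats count as Complaints values(SendersMNO) as SendersMNO by SendersAddress
| sort 10 -Complaints
| table SendersAddress, SendersMNO, Complaints
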
Add-on: https://splunkbase.splunk.com/app/3662/
Known Affected: 4.8.1

Symptoms: You begin to predominantly see hexadecimal events in your Cisco FireSIGHT index/sourcetype instead of real data, and you see large gaps between events (usually ~10 minutes, the time it takes to roll over a file). The 'Source' also ends with '.log.swp' instead of '.log'.

Cause: $SPLUNK_HOME/etc/apps/TA-eStreamer/default/inputs.conf

[monitor://$SPLUNK_HOME/etc/apps/TA-eStreamer/bin/encore/data]
disabled = 0
source = encore
sourcetype = cisco:estreamer:data
crcSalt = <SOURCE>

I believe the issue is with the line 'source = encore', because 'crcSalt = <SOURCE>' is also specified. Since all files have the same source, all files have the same crcSalt, which is why the actual '.log' is not collected. The '.swp' manages to get collected as Splunk checks the '.log', and since .swp is a very short-lived file, Splunk accidentally collects a lot of garbage unrelated to the actual file contents (sorry, Linux admins, for butchering the technical detail).

Solution: Edit $SPLUNK_HOME/etc/apps/TA-eStreamer/default/inputs.conf, comment out the source line, then restart Splunk services.

If someone knows of a way to override source back to the filename (which changes frequently) via a local inputs.conf, so that editing a default inputs.conf is not necessary, please comment below. Those with the Cisco license allowing TAC support on this add-on may want to raise this issue with them so they can fix it for new downloads and future versions -- I lack that particular license. Hope this helps someone (I did a search for encore and hex and didn't see any prior conversation on the topic).

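For clarity, a sketch of what the stanza looks like after the workaround described above; the only change is commenting out the source override, so each file's source (and therefore its crcSalt) falls back to its own path:

[monitor://$SPLUNK_HOME/etc/apps/TA-eStreamer/bin/encore/data]
disabled = 0
# source = encore
sourcetype = cisco:estreamer:data
crcSalt = <SOURCE>
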
I have set up a single-node test instance of Splunk to try to ingest Zscaler LSS (not NSS) logs via a TCP input. However, it is not ingesting any data, despite my being able to see traffic via tcpdump on that port. I have installed the latest Zscaler Splunk App (v2.0.7) and the Zscaler Technical Add-on (v3.1.2):

[root@ip-10-127-0-113 apps]# ls | grep scaler
TA-Zscaler_CIM
zscalersplunkapp

Via the Web UI, I have set up a TCP input on port 10000 and set the sourcetype, app, and index options. I have checked to make sure that Splunk is listening on TCP/10000 and can see that it is:

[root@ip-10-127-0-113 apps]# netstat -antp | grep 10000
tcp 0 0 0.0.0.0:10000 0.0.0.0:* LISTEN 7992/splunkd
tcp 0 0 10.127.0.113:10000 x.x.x.x:38392 SYN_RECV -
tcp 0 0 10.127.0.113:10000 x.x.x.x:51586 SYN_RECV -
tcp 0 0 10.127.0.113:10000 x.x.x.x:53844 SYN_RECV -

I can't see any errors in the _internal index (although I could be searching wrong). I'm using the search below:

index=_internal "err*"

The only errors I can see relate to the 'summarize' command. Any pointers would be really appreciated. Many thanks,

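A couple of searches that might help confirm whether splunkd itself is seeing anything on that input; this is only a sketch, and the component name is the generic splunkd TCP-input one rather than anything Zscaler-specific:

index=_internal sourcetype=splunkd component=TcpInputProc
index=_internal sourcetype=splunkd log_level=ERROR OR log_level=WARN
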
Hi Community, how can I make a saved search report open in statistics mode and allow downloading of a .csv file? Currently I have to open the report in Search to get the option to download the .csv. I added the "display.statistics.show = 1" option to the saved search.

Thank you

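A sketch of the savedsearches.conf display settings that I believe control which tab a report opens on (the stanza name is a placeholder, and I haven't confirmed these alone expose the export option):

[My Report]
display.general.type = statistics
display.page.search.tab = statistics
display.statistics.show = 1
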
Hi, I am using Splunk Cloud and I need to disable some indexes temporarily. I am using the AWS add-on app to ship AWS ALB logs from an S3 bucket. My daily ingestion is going beyond the license, and I would like to disable these indexes temporarily.

I can see there is an option to disable an input in the Inputs section, but the same option is not available for an index, although the index listing page shows it as enabled in the last column. I would appreciate it if someone has a solution for the problem mentioned above. Thanks.

Muzeeb

Hello! My objective is to read the values of a Splunk table visualization from a dashboard into a JavaScript object for further processing. I'm not sure what object yet, but my main issue lies with iterating through the table and extracting the cell values. Can anybody provide some sample JS code for identifying the table object and iterating through its values? Thanks! Andrew

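A minimal sketch of the usual SplunkJS pattern, assuming the table's search manager has the id "mainSearch" (both the id and the output options are assumptions; adjust them to your dashboard):

require(["splunkjs/mvc", "splunkjs/mvc/simplexml/ready!"], function(mvc) {
    // Grab the search manager that backs the table, then ask it for results
    var searchManager = mvc.Components.get("mainSearch");
    var resultsModel = searchManager.data("results", { output_mode: "json_rows", count: 0 });
    resultsModel.on("data", function() {
        var fields = resultsModel.data().fields;  // column names
        var rows = resultsModel.data().rows;      // array of row value arrays
        rows.forEach(function(row) {
            console.log(fields, row);             // replace with your own processing
        });
    });
});
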
Hi, I would like to know which users are logging in, and from which region/IP.

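If this is about logins to Splunk itself, a sketch against the audit index might be a starting point; treat clientip as a placeholder for whatever IP field your login events actually carry, and check a raw event first:

index=_audit action="login attempt" info=succeeded
| iplocation clientip
| stats count by user, clientip, Country, Region, City
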
Hi all, I'm trying to find which programs from a given list haven't raised an event in the event log in the last time period, to create an alert based on it. For an individual alert I have

index=eventlogs SourceName="my program"
| stats count as COUNT_HEARTBEAT
| where COUNT_HEARTBEAT=0

which works. How can I supply a list of programs and list which of them have a COUNT_HEARTBEAT of 0, so that I can make a generic alert?

Thanks,

Kind regards,

Ian

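One common pattern is to keep the expected program names in a lookup and append a zero-count row for each, so programs with no events still show up. A sketch, where expected_programs.csv (with a SourceName column) is a hypothetical lookup you would create:

index=eventlogs [| inputlookup expected_programs.csv | fields SourceName ]
| stats count by SourceName
| append [| inputlookup expected_programs.csv | fields SourceName | eval count=0 ]
| stats sum(count) as COUNT_HEARTBEAT by SourceName
| where COUNT_HEARTBEAT=0
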
Hi guys, I am new to Splunk. I need to run a query to extract the system name value, which is repeated twice in the same log event. The logs in one event are:

user: user1 system: system1 user:user2 system: system2

The output should look like below:

output1 output2
system1 system2

Cheers.

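A minimal sketch using rex with max_match to pull every system value out of the event, then splitting the resulting multivalue field into two columns (the regex assumes the values contain no spaces):

... | rex max_match=0 "system:\s*(?<system>\S+)"
| eval output1=mvindex(system, 0), output2=mvindex(system, 1)
| table output1 output2
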
Hello everyone, I am working on a dashboard with two event panels, and I would like to use the outcome of panel 1 as an input to panel 2. Can you please advise what the optimal way is to take a specific field's output and use it as an input in the next panel? I tried a base search, but it did not provide the result I expected.

Panel 1:

<query>index=xyz sourcetype=vpn *session*
| fields session, connection_name, DNS, ip_subnet, Location, user
| stats values(connection_name) as connection, values(Dns) as DNS, by session
| join type=inner session
    [ search index=abc sourcetype=vpn *Dynamic*
      | fields assigned_ip, session
      | stats values(assigned_ip) as IP by session]
| table User, session, connection_name, ip_subnet, IP, DNS, Location
| where user="$field1$" OR connection_name="$field2$" OR session="$field3$"</query>

Once the output is generated for the above query, I would like to take the value displayed for Ip_subnet and use it as input for panel 2.

Panel 2:

<query>| inputlookup letest.csv
| rename "IP address details" as IP
| xyseries Ip_subnet, Location, IP
| where Ip_subnet="$Ip_subnet$"</query>

In panel 2, $Ip_subnet$ is the input that would be taken from the value of Ip_subnet in panel 1.

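One way this is usually handled in Simple XML is to set a token from panel 1's results and reference that token in panel 2; a sketch, assuming panel 1's job returns the Ip_subnet you want in its first result row:

<search>
  <query>... panel 1 query ...</query>
  <done>
    <set token="Ip_subnet">$result.Ip_subnet$</set>
  </done>
</search>

Or, if the value should come from the row a user clicks in panel 1:

<drilldown>
  <set token="Ip_subnet">$row.Ip_subnet$</set>
</drilldown>
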