All Topics


I'm really annoyed. I am using Splunk Enterprise and I'm trying to filter on some JSON (basically a string) in my Splunk logs that has line breaks after each field/key in the JSON string, i.e.:

Some random search results here { key1: value1 key2: value2 key3: value3 }, some log message here

Patterns like .* and many other regex characters work just fine in the search, but I have tried every combination of [\r\n\s]+ and the like and get 0 results, despite the same pattern working in the regex101.com online sandbox. I think I read online that Splunk logs don't preserve the line breaks, but if that's the case, what does the final result look like? I tried querying without whitespace, without line breaks, and every combination under the sun, and never got a hit back in my search results. Also, I'm not using rex, as I don't need to extract anything; I just want to filter and maybe do a stats count on my results. Can anyone provide a simple solution? Thank you!
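A minimal sketch of one way to match across the line breaks, assuming the braces and keys above appear literally in _raw (key1/value1 and the base search are placeholders). Bare search-bar terms are matched against indexed tokens, not as regular expressions, so pipe to the regex command, where PCRE flags such as (?s) work:

<your base search>
| regex _raw="(?s)\{\s*key1:\s*value1\s*key2:\s*value2"
| stats count

(?s) lets . span newlines, and \s already matches \r and \n, which is why [\r\n\s]+ behaves in regex101 but not as a bare term in the search bar.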
Hello, I am a user of some dashboards, not an admin/dev. Is it possible to get an email whenever the search code of a dashboard changes? Thanks!
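One possible approach, as a sketch you would likely need an admin to schedule for you: edits made in Splunk Web land as POSTs against the views endpoint in splunkd's access log, so an alert along these lines (assuming access to the _internal index; the dashboard name is a placeholder) could email you on changes:

index=_internal sourcetype=splunkd_access method=POST "/data/ui/views/my_dashboard"
| stats count by user, uri

Schedule it on an interval, trigger on "number of results > 0", and add an email action.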
Hi folks, Splunk Enterprise, version 7.1.0. I have a dashboard with many daily scheduled reports, one panel for each. The report scheduling works normally and I can see the latest results under "view recent", but my dashboard does not load the latest report. I tried adding autorefresh, and manually clicking the refresh button on each panel, but it still displays the old report.

<dashboard refresh="30">
  <label>My Dashboard</label>
  <row>
    <panel>
      <title>Panel 1 title</title>
      <table>
        <search ref="Report - Panel 1"></search>
        <option name="count">10</option>
        <option name="dataOverlayMode">none</option>
        <option name="drilldown">none</option>
        <option name="percentagesRow">false</option>
        <option name="rowNumbers">false</option>
        <option name="totalsRow">false</option>
        <option name="wrap">true</option>
      </table>
    </panel>
    <panel>
      <title>Panel 2</title>
      <table>
        <search ref="Report - Panel 2"></search>
        <option name="count">10</option>
        <option name="dataOverlayMode">none</option>
        <option name="drilldown">none</option>
        <option name="percentagesRow">false</option>
        <option name="rowNumbers">false</option>
        <option name="totalsRow">false</option>
        <option name="wrap">true</option>
      </table>
    </panel>
  </row>
</dashboard>

I checked around the Splunk forum, but none of the solutions worked for me. What should I do to make this work properly? TIA.
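A sketch of one thing to try, assuming the underlying reports really are rescheduling on time: Simple XML lets each panel's search reload on its own interval via a refresh element inside the search (the 5m interval is illustrative), which I believe also re-fetches the latest artifact for a referenced report:

<search ref="Report - Panel 1">
  <refresh>5m</refresh>
  <refreshType>delay</refreshType>
</search>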
Hello, I am looking for Splunk 7.2.1 to simulate a customer environment for troubleshooting and upgrade testing. I would appreciate it if anyone could share the download links or the binaries; that would be really helpful. TIA, mvRishipur.
How would I use Splunk to investigate an Excessive Failed Login alert, and what are the things to look for? Thanks.
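A starting-point sketch, assuming Windows Security events in a wineventlog index (the index name and field names depend on your environment and add-ons): profile who is failing, from where, and how often:

index=wineventlog EventCode=4625
| stats count, min(_time) as first_seen, max(_time) as last_seen by user, src_ip
| sort - count

Things worth looking for: one source hitting many accounts (password spraying), one account failing from many sources, a burst of 4625 failures followed by a 4624 success on the same account, and whether any of the accounts involved are privileged.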
Actually, I downloaded the trial version, but I can't log in; it shows an error saying the username and password are wrong. How can I log in to the Splunk trial version?
Hi everyone, I need to compare 2 fields with the like command, but I can't get it to work even though I've tried many solutions. For example:

event1: field1="raceCar" field2="car"
event2: field1="trying" field2="hello"
event3: field1="splunk" field2="helloSplunkEnterprise"

Desired result:

event1: result=hit
event2: result=miss
event3: result=hit

I tried | eval results = if(match(...), ...) but it didn't work. Does anyone have a suggestion for this SPL? Thanks a lot for your help.
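A sketch of one way to get the hit/miss behaviour shown above, assuming "hit" means either field contains the other, ignoring case (like() patterns can be built by string concatenation with the . operator):

| eval result = if(like(lower(field1), "%" . lower(field2) . "%")
             OR like(lower(field2), "%" . lower(field1) . "%"), "hit", "miss")

A plain match(field1, field2) misses event1 because match() treats its second argument as a regular expression and is case-sensitive by default, so "car" never matches the capital C in "raceCar".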
Hello all, I have an issue with my DB Connect. It won't fetch rows from any table in a Postgres database, but it will show the table names and the rows included in each table. When I select a table, the loading bar goes to 20 percent and sticks there.
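A hedged first step: DB Connect writes its own logs under $SPLUNK_HOME/var/log/splunk, so a search along these lines (the source pattern is an assumption; adjust it to the file names you actually see on disk) may surface the underlying JDBC error that the stuck loading bar hides:

index=_internal source=*splunk_app_db_connect* (ERROR OR WARN)
| sort - _time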
Hello, I have a HF running on a Linux machine. I have root access to that machine using sudo bash, as sudo - splunk or su - splunk does not get me access. When I copy files into the folders that my monitor stanzas point to, the events are not forwarded to the Splunk indexer; I cannot see them in Splunk. However, when I run chown -R splunk:splunk /opt/splunk and then restart Splunk, it works as expected and I can see those events. So every time I copy files into the HF folders, I need to run chown and restart Splunk to make them available. Is there any way to resolve this so that I don't need to run chown and restart Splunk to forward events? Thank you so much.
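One possible approach, as a sketch, assuming the files land in a fixed directory on a filesystem that supports POSIX ACLs (the path is a placeholder): give the splunk user read access by default, so nothing needs chown or a restart after each copy:

setfacl -R -m u:splunk:rX /data/incoming       # grant read (and traverse on dirs) to existing contents
setfacl -R -d -m u:splunk:rX /data/incoming    # default ACL: files created later inherit the same access

Alternatively, copy the files as the splunk user in the first place (sudo -u splunk cp ...). Once the files are readable, the tailing processor should normally pick them up without a Splunk restart.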
I have a current output in the form of a table with rows representing the time spent in various checkpoints, with the last row being the total time. I would like to calculate the percentage of each row in relation to the "Total Duration" row. If this is not possible, I am OK with calculating the percentage based on the sum of all the P50/P90 column values instead.

Marker              P50    P90
Point 1 Duration    10     20
Point 2 Duration    40     100
Point 3 Duration    50     80
Total Duration      100    200

I would like to insert a column for the percentage like this (the 100% in the bottom row is optional):

Marker              P50    P50%    P90    P90%
Point 1 Duration    10     10%     20     10%
Point 2 Duration    40     40%     100    50%
Point 3 Duration    50     50%     80     40%
Total Duration      100    100%    200    100%

Thank you very much.
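A sketch of one way to do this, assuming the table above is the output of your existing search and the total row is literally named "Total Duration": copy the total row's values onto every row with eventstats, then divide:

... your existing search ...
| eventstats max(eval(if(Marker=="Total Duration", P50, null()))) as totalP50,
             max(eval(if(Marker=="Total Duration", P90, null()))) as totalP90
| eval "P50%" = round(100 * P50 / totalP50) . "%",
       "P90%" = round(100 * P90 / totalP90) . "%"
| fields - totalP50, totalP90
| table Marker, P50, P50%, P90, P90%

If you would rather base it on the sum of the point rows, replace the eventstats line with sums over the rows where Marker!="Total Duration", using the same eval(if(...)) style.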
Hello, can I use XML for searches/alerts? Is there any reference? Can you provide an example of defining a search for a particular view? Thanks!
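A minimal sketch of a Simple XML view with an inline search (the query, label, and time range are illustrative). Views and dashboards are defined in XML like this, while alerts themselves are saved searches (savedsearches.conf) rather than XML:

<dashboard>
  <label>Example View</label>
  <row>
    <panel>
      <table>
        <search>
          <query>index=_internal | stats count by sourcetype</query>
          <earliest>-24h@h</earliest>
          <latest>now</latest>
        </search>
      </table>
    </panel>
  </row>
</dashboard>

The Simple XML Reference in the Splunk documentation covers the full element set.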
Hello, is it possible to append two searches? I have a search that ends in:

| table A B C

and I want to append to the above some values under A, B, C that I calculate. Can you tell me the syntax for that, please? Thanks!
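A sketch using append with makeresults (the literal values are placeholders for whatever you calculate):

... your search ...
| table A B C
| append
    [| makeresults
     | eval A="a1", B="b1", C="c1"
     | table A B C]

Each append subsearch contributes one row here; repeat the append (or build several rows inside the subsearch) if you need more.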
Hi, all. How do I index compressed files in .bz2 format using a Universal Forwarder installed on a Windows server?

In the UF:

inputs.conf

[monitor://E:\LogServer\Logs\*.bz2]
sourcetype = XmlWinEventLog
disabled = 0
index = main

props.conf

[source::...E:\\LogServer\\Logs\\*.bz2]
sourcetype = XmlWinEventLog

[XmlWinEventLog]
invalid_cause = archive
unarchive_cmd = _auto

According to the most recent docs, Splunk does index compressed files: https://docs.splunk.com/Documentation/Splunk/8.2.1/Admin/Propsconf

But even following these instructions, the logs are still not indexed, and I was also unable to find any error in splunkd.log that indicates a problem. Does anyone have any suggestions?

Thanks in advance.

James \°/
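Two hedged things worth checking. First, the source:: stanza mixes the ... wildcard with a full drive path, which may not match the monitored files as intended; a plainer stanza (path taken from the inputs.conf above) would be:

[source::E:\\LogServer\\Logs\\*.bz2]
sourcetype = XmlWinEventLog

Second, Splunk treats an archive as a unit: it decompresses and indexes the whole file, and re-reads the entire archive if it changes rather than tailing it, and a file whose initial content matches something already indexed can be skipped by the CRC check, which can look like silent inaction.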
I've got my universal forwarders and heavy forwarders doing indexer discovery through the cluster master, like so:

outputs.conf

[indexer_discovery:clustermaster]
pass4SymmKey = {password}
master_uri = https://{my cluster master}.domain.foo:8089

[tcpout:clustermastergroup]
indexerDiscovery = clustermaster
useACK = true

[tcpout]
defaultGroup = clustermastergroup

Is there any reason I could not do the same in the cluster master's outputs.conf file? Basically, it would ask itself over 8089 who the peer nodes are.
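For what it's worth, a sketch of the outputs.conf the docs suggest when the manager node forwards its own internal data to the peers, combined here with the discovery stanzas from the post; the indexAndForward settings (the part the docs add) keep the master from also indexing locally:

[indexAndForward]
index = false

[indexer_discovery:clustermaster]
pass4SymmKey = {password}
master_uri = https://{my cluster master}.domain.foo:8089

[tcpout:clustermastergroup]
indexerDiscovery = clustermaster
useACK = true

[tcpout]
defaultGroup = clustermastergroup
forwardedindex.filter.disable = true
indexAndForward = false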
I recently upgraded from 8.1 to 8.2.3 and noticed the message about migrating the KV store to WiredTiger. I decided to migrate, and followed the instructions here: https://docs.splunk.com/Documentation/Splunk/8.2.3/Admin/MigrateKVstore#Migrate_the_KV_store_after_an_upgrade_to_Splunk_Enterprise_8.1_or_higher_in_a_single-instance_deployment

It failed because, I think, mongodump failed. The official reason in splunkd.log:

11-05-2021 14:10:57.695 -0700 ERROR MongodRunner [25826 MainThread] - MongtoolRunner exited with nonzero status=4
11-05-2021 14:10:57.695 -0700 ERROR KVStoreConfigurationProvider [25826 MainThread] - Failed to run mongodump, shutting down mongod

mongod.log output:

mongodump fatal error: unrecognized DWARF version in .debug_info at 6
mongodump runtime stack:
mongodump panic during panic
mongodump runtime stack:
mongodump stack trace unavailable

I removed the migration line in server.conf, started Splunk, and tried to back up the KV store (both statuses were "ready"), and it failed to create anything in kvstorebackup; here is the relevant splunkd.log output:

11-05-2021 14:54:31.221 -0700 INFO KVStoreBackupRestore [27091 KVStoreBackupThread] - backup started for archiveName="kvdump_1636149271", using method=2
11-05-2021 14:54:31.284 -0700 ERROR MongodRunner [41130 BackupRestoreWorkerThread] - MongtoolRunner exited with nonzero status=4
11-05-2021 14:54:31.284 -0700 WARN KVStoreBulletinBoardManager [41130 BackupRestoreWorkerThread] - Failed to backup KV Store. Check for errors in the splunkd.log file in the $SPLUNK_HOME/var/log/splunk directory.

with the same mongodump errors as before, which makes me think they are related. I checked my certificate (still good until 2024), permissions, and ownerships, and all seem to be correct. Any ideas?
I have a tstats search that isn't returning a count consistently. In the where clause, I have a subsearch that determines the time modifiers. Here's the search:

| tstats count from datamodel=Vulnerabilities.Vulnerabilities where index=qualys_i
    [| search earliest=-4d@d index=_internal host="its-splunk7-hf.ucsd.edu" sourcetype="ta_QualysCloudPlatform*" host_detection ("Done loading detections" OR "Running now")
     | stats `stime(_time)` `slist(_raw)` count by PID
     | eval duration = last_seen - first_seen,
            earliest = strftime(first_seen - 300, "%m/%d/%Y:%H:%M:%S"),
            latest = strftime(last_seen + 300, "%m/%d/%Y:%H:%M:%S")
     | where count > 1 AND duration < 82800
     | sort -last_seen
     | head 1
     | return earliest latest]

If I run the subsearch on its own...

earliest=-4d@d index=_internal host="its-splunk7-hf.ucsd.edu" sourcetype="ta_QualysCloudPlatform*" host_detection ("Done loading detections" OR "Running now")
| stats `stime(_time)` `slist(_raw)` count by PID
| eval duration = last_seen - first_seen,
       earliest = strftime(first_seen - 300, "%m/%d/%Y:%H:%M:%S"),
       latest = strftime(last_seen + 300, "%m/%d/%Y:%H:%M:%S")
``` Exclude results that ran over 23 hours or didn't finish ```
| where count > 1 AND duration < 82800
| sort -last_seen
| head 1
| return earliest latest

...I get the time modifiers accurately (e.g., earliest="11/05/2021:06:25:51" latest="11/05/2021:11:31:12"). When I inspect the job (of the first search), it derives the same time modifiers (in phase0, phase1, and remoteSearch). The issue: when I run the first search, my count is double. In other words, it double-counts each record. If I explicitly put the time modifiers in place of the subsearch, the count is accurate (not doubled). Has anyone run into this?
I'm trying to post REST data via HTTP to Splunk. This works when using a pre-generated token to an HEC:

POST /services/collector/event HTTP/1.0\r\nHost: galaxy.xypro.com\r\nContent-Type: application/json\r\nKeep-Alive: 100\r\nConnection: keep-alive\r\nAuthorization: Splunk 1d07454b-d9ef-41b0-9450-59d8670a78c7\r\nContent-Length: 166\r\n\r\n{\"time\": 1636117458, \"host\": \"galaxy.xypro.com\", \"source\": \"test\", \"event\": { \"message\": \"2021-11-05:13:04:18.491986: Logging test message #0\", \"severity\": \"INFO\" } }

HTTP/1.1 200 OK\r\nDate: Fri, 05 Nov 2021 20:05:17 GMT\r\nContent-Type: application/json; charset=UTF-8\r\nX-Content-Type-Options: nosniff\r\nContent-Length: 27\r\nVary: Authorization\r\nConnection: Keep-Alive\r\nX-Frame-Options: SAMEORIGIN\r\nServer: Splunkd\r\n\r\n{\"text\":\"Success\",\"code\":0}

However, when I try to generate a session token to allow basic authorization, I see the following response, even though the user and password are correct:

POST HTTPS://localhost:8089/services/auth/login HTTP/1.0\r\nHost: galaxy.xypro.com\r\nContent-Type: application/json\r\nKeep-Alive: 100\r\nConnection: keep-alive\r\nAuthorization: Basic a3B3YXRlcnNvbjpUZXN0MTIzNDU=\r\nContent-Length: 48\r\n\r\n{\"username\":\"kpwaterson\",\"password\":\"Test12345\"}

HTTP/1.1 400 Bad Request\r\nDate: Fri, 05 Nov 2021 19:57:40 GMT\r\nExpires: Thu, 26 Oct 1978 00:00:00 GMT\r\nCache-Control: no-store, no-cache, must-revalidate, max-age=0\r\nContent-Type: text/xml; charset=UTF-8\r\nX-Content-Type-Options: nosniff\r\nContent-Length: 129\r\nConnection: Keep-Alive\r\nX-Frame-Options: SAMEORIGIN\r\nServer: Splunkd\r\n\r\n<?xml version=\"1.0\" encoding=\"UTF-8\"?>\n<response>\n <messages>\n <msg type=\"WARN\">Login failed</msg>\n </messages>\n</response>\n

I was also investigating using receivers/simple for HTTP messages. Although the message is posted to Splunk, a response is never received:

POST /services/receivers/simple?source=NonStop&index=main&sourcetype=json_no_timestamp HTTP/1.0\r\nHost: galaxy.xypro.com\r\nContent-Type: application/json\r\nKeep-Alive: 100\r\nConnection: keep-alive\r\nAuthorization: Bearer eyJraWQiOiJzcGx1bmsuc2VjcmV0IiwiYWxnIjoiSFM1MTIiLCJ2ZXIiOiJ2MiIsInR0eXAiOiJzdGF0aWMifQ.eyJpc3MiOiJrZW4ud2F0ZXJzb24gZnJvbSBWTS1ERVYtU1BMVU5LIiwic3ViIjoia2VuLndhdGVyc29uIiwiYXVkIjoiRGV2ZWxvcG1lbnQiLCJpZHAiOiJMREFQOi8vbWZhIiwianRpIjoiODUzMDYyZmFhZjA0NWY0Y2JlMWEyNGMxZWE3NTAyYjRmMjEwMGEyNzE0NzA1N2Q0MmUxOGVkYWRlMTYyZTlkZiIsImlhdCI6MTYzMzUyNTE5MywiZXhwIjoxNjM2MTE3MTkzLCJuYnIiOjE2MzM1MjUxOTN9.3TKSCeK52awMJDxNzfvfW4PNewsGVlKkFXSf0Vy1Dv7JH4DNH9Ogn_w5WZLkZkeNXmjJqU8opORXW7DjxA2eag\r\nContent-Length: 166\r\n\r\n{\"time\": 1636117714, \"host\":\"galaxy.xypro.com\", \"source\": \"test\", \"event\": { \"message\": \"2021-11-05:13:08:34.900042: Logging test message #0\", \"severity\": \"INFO\" } }

Could you please let me know what might be the issue with generating the session key, and why a response is not received from receivers/simple? Thanks.
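A sketch that may explain the login failure: /services/auth/login expects form-encoded username and password in the POST body (it does not read JSON there, and it ignores the Basic header on that endpoint), so the JSON body never parses as credentials. Something along these lines, shown with curl for clarity (host and credentials taken from the post):

curl -k https://localhost:8089/services/auth/login \
     -d username=kpwaterson \
     -d password=Test12345

A successful call returns <sessionKey>...</sessionKey>, which you then present on later requests as: Authorization: Splunk <sessionKey>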
I am installing the Splunk universal forwarder on an AWS Elastic Beanstalk environment to forward logs to our new Splunk Cloud application. Everything sets up correctly, and I am able to find data when searching the _internal index with the hostname of the instance. The problem is that no data from the file I'm monitoring is actually being forwarded, though I can tail the file and see it being updated when new logs from my web application are added. I know the monitor succeeds, because in the AWS logs after a deployment I can see:

2021-11-05 20:06:09,416 P3428 [INFO] Added monitor of '/tmp/logs/node.log'.

and I add it with:

/opt/splunkforwarder/bin/splunk add monitor "/tmp/logs/node.log" -hostname "$splunk_logs_hostname" -sourcetype json -index node

So if I understand this correctly, it should show up in my Splunk application under the "node" index. But when I search for it, nothing comes up, and if I go to Settings > Indexes, where I created the index, there are no events and no current size. Does anyone have any ideas on how to troubleshoot this issue?
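A sketch for troubleshooting from the Splunk Cloud side, assuming the forwarder's _internal data is arriving (the path is from the post): ask the tailing processor what it thinks of the file:

index=_internal source=*splunkd.log* (component=TailReader OR component=TailingProcessor OR component=WatchedFile) "/tmp/logs/node.log"

One common cause of exactly these symptoms is a "node" index that exists on the forwarder but not in Splunk Cloud itself, in which case the indexers drop the events and splunkd.log shows "received event for unconfigured/disabled/deleted index" errors; searching index=_internal "unconfigured" can confirm that quickly.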
My Python is 3.8.5 and splunk-sdk is 1.6.16. My Splunk developer gave me a URL, and I took its search string to retrieve data, as shown below. Here is my search string and the additional Python code; the search/earliest/latest parts were added after copy/pasting the search string.

SEARCH_STRING = f"""
    search sourcetype="builder:payeeservice" host="JWPP*BLDR*P*" "*PayeeAddResponse" "*" "*" "*" "*" "*" "*" "*"
    earliest=-1h@h latest=-0h@h
    |rex d5p1:Description>(?<Description>.*</d5p1:Description>)
    |eval Description = replace(Description,"<[/]*[d]5p1:[\S]*>|<[d]5p1:[\S\s\"\=]*/>", "")
    |rex "GU\(((?P<SponsorId>[^;]+);(?P<SubscriberId>[^;]+);(?P<SessionId>[^;]*);(?P<CorrelationId>[^;]+);(?P<Version>\w+))\)"
    |table _time,SponsorId, SubscriberId,SessionId, CorrelationId,Description
    |join type=left CorrelationId [search sourcetype="builder:payeeservice" host="JWPP*BLDR*P*"  "*AdditionalInformation*" |xmlkv ]
    |eval Timestamp = if((TenantId != ""),Timestamp,_time),PayeeName = if((TenantId != ""),PayeeName,""), Message = if((Description != ""),Description,Message), Exception = if((TenantId != ""),Exception,""), Address = if((TenantId != ""),Address,""), PayeeType = if((TenantId != ""),PayeeType,""),MerchantId = if((TenantId != ""),MerchantId,""),AccountNumber = if((TenantId != ""),AccountNumber,""),SubscriberId = if((TenantId != ""),UserId,SubscriberId),SponsorId = if((TenantId != ""),TenantId,SponsorId)
    |table Timestamp, SponsorId,SubscriberId, PayeeName,Message,Exception,CorrelationId,SessionId,PayeeName,Address,PayeeType,MerchantId,AccountNumber
"""

import splunklib.results as results

service = connect_Splunk()
rr = results.ResultsReader(service.jobs.create(SEARCH_STRING))
ord_list = []
for result in rr:
    if isinstance(result, results.Message):
        # skip messages
        pass
    elif isinstance(result, dict):
        # Normal events are returned as dicts
        ord_list.append(result)

I get this error, so something is wrong in my search string. How do I fix it?

splunklib.binding.HTTPError: HTTP 400 Bad Request -- Error in 'SearchParser': Mismatched ']'.

Thanks.
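A sketch of the most likely culprit: the first rex pattern is not quoted, so the SPL parser reads characters such as ] and > in it as search syntax rather than as part of a regex, which fits the Mismatched ']' error. Quoting it (and making the capture non-greedy, so it stops at the closing tag) should parse:

| rex "d5p1:Description>(?<Description>.*?)</d5p1:Description>"

Separately, since the SPL lives in a Python f-string, any literal { or } in the query would have to be doubled ({{ and }}) or Python will alter the string before Splunk ever sees it; this particular query has none, but it's worth remembering when editing it.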
I am trying to use the following to get a "Splunk Workload Pricing Estimate", but as stated above, it errors out. Do you have a better script/SPL to share, please? Thanks a million.

index=_introspection earliest=-30d component=Hostwide
    [| inputlookup dmc_assets
     | table serverName as host, search_group
     | search search_group=*dmc_group_index* OR search_group=*dmc_group_search_head*
     | table host]
| eval cpu_util = ('data.cpu_user_pct' + 'data.cpu_system_pct')
| bin _time span=5m
| table _time host data.cpu_count data.virtual_cpu_count data.cpu_idle_pct data.cpu_idle_pct cpu_util
```5-min Roll-Up```
| stats max(data.cpu_count) AS physical_cores, max(data.virtual_cpu_count) AS numberOfVirtualCores,
        max(cpu_util) as CPU_util_pct_max
        by _time host
| eval max_5minCPUsUsed = CPU_util_pct_max*numberOfVirtualCores/100
| stats values(host) as host_list dc(host) as total_hosts sum(physical_cores) as physical_cores sum(numberOfVirtualCores) as numberOfVirtualCores
        sum(max_5minCPUsUsed) as max_5minCPUsUsed
        by _time
```24h Roll-Up```
| bin _time span=1d
| stats values(host_list) as host_list max(total_hosts) as total_hosts max(physical_cores) as physical_cores max(numberOfVirtualCores) as numberOfVirtualCores
        p90(max_5minCPUsUsed) as p90Daily_5minMax_CPUsUsed
        by _time
```Month Roll-Up```
| appendpipe [
    | stats max(total_hosts) as total_hosts, max(physical_cores) as physical_cores, max(numberOfVirtualCores) as numberOfVirtualCores,
            p90(p90Daily_5minMax_CPUsUsed) as p90Daily_5minMax_CPUsUsed
    | eval _time="90th Perc. across report duration (equivalent to 3 days out of 30)"]
| eval p90Daily_5minMax_CPUsUsed=round(p90Daily_5minMax_CPUsUsed,2)
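A sketch of one likely fix: the table command does not support "as" renaming, so | table serverName as host, search_group is a parse error. Renaming first (everything else unchanged from the post) should let the subsearch run:

[| inputlookup dmc_assets
 | rename serverName as host
 | search search_group=*dmc_group_index* OR search_group=*dmc_group_search_head*
 | table host]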