All Topics

Hi, am I doing this correctly, or is there another way to tabulate this JSON? I've seen many examples on the forums of people using mvexpand and mvzip to tabulate their JSON, but those work with just a few fields rather than a handful, and not to any depth. I have events pulled from the Splunk Add-on for Microsoft Cloud Services. As you may already know, this pulls body.records{} -> [many subitems in an array] for individual events. One (redacted) event in Splunk may look like this:

{
  body: {
    records: [
      {
        backendPoolName: XXXX-pool
        backendSettingName: XXXX-backend-setting
        category: ApplicationGatewayAccessLog
        listenerName: XXXX-ssl-listener
        operationName: ApplicationGatewayAccess
        properties: {
          clientIP: 4.5.6.7
          clientPort: 56522
          host: xxxx
          httpMethod: GET
          httpStatus: 302
          httpVersion: HTTP/1.1
          instanceId: appgw_1
          originalHost: XXXX
          originalRequestUriWithArgs: /XXXX
          receivedBytes: 386
          requestQuery: client_id=XXXX
          requestUri: /XXXX
          sentBytes: 1404
          serverResponseLatency: 0.012
          serverRouted: XXXX
          serverStatus: 302
          sslCipher: ECDHE-RSA-AES256-GCM-SHA384
          sslClientCertificateFingerprint:
          sslClientCertificateIssuerName:
          sslClientVerify: NONE
          sslEnabled: on
          sslProtocol: TLSv1.2
          timeTaken: 0.013
          transactionId: XXXX
          userAgent: Mozilla/5.0 (compatible; AhrefsBot/7.0; +http://ahrefs.com/robot/)
        }
        resourceId: /SUBSCRIPTIONS/XXXX/RESOURCEGROUPS/XXXX/PROVIDERS/MICROSOFT.NETWORK/APPLICATIONGATEWAYS/XXXX
        ruleName: XXXX-ruleset
        timeStamp: 2021-03-05T23:53:01+00:00
      }
      {
        backendPoolName: XXXX-pool
        backendSettingName: XXXX-backend-setting
        category: ApplicationGatewayAccessLog
        listenerName: XXXX-ssl-listener
        operationName: ApplicationGatewayAccess
        properties: {
          clientIP: 1.2.3.4
          clientPort: 52478
          host: XXXX
          httpMethod: GET
          httpStatus: 200
          httpVersion: HTTP/1.1
          instanceId: appgw_1
          originalHost: XXXX
          originalRequestUriWithArgs: /session/XXXX
          receivedBytes: 2087
          requestQuery: XXXX
          requestUri: /session/XXXX
          sentBytes: 6081
          serverResponseLatency: 0.024
          serverRouted: XXXX
          serverStatus: 200
          sslCipher: ECDHE-RSA-AES256-GCM-SHA384
          sslClientCertificateFingerprint:
          sslClientCertificateIssuerName:
          sslClientVerify: NONE
          sslEnabled: on
          sslProtocol: TLSv1.2
          timeTaken: 0.028
          transactionId: XXXX
          userAgent: Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/537.36 (KHTML, like Gecko) HeadlessChrome/87.0.4280.20 Safari/537.36
        }
        resourceId: /SUBSCRIPTIONS/XXXX/RESOURCEGROUPS/XXXX-PRGATEWAY/PROVIDERS/MICROSOFT.NETWORK/APPLICATIONGATEWAYS/GATEWAY-PR-AG
        ruleName: XXXX-ruleset
        timeStamp: 2021-03-05T23:53:03+00:00
      }
    ]
  }
  x-opt-enqueued-time: 1614988459311
  x-opt-offset: 29218690517200
  x-opt-sequence-number: 19033991
}

First I had issues with empty values, which I have resolved. Some of these events are so big they won't render in the web GUI, so it's hard to tell what's going on; we have a plan to break them down using props/SEDCMD. This is my query:

index=eventhub AND source="*C113*" AND body.records{}.resourceId="/subscriptions/XXXXX/resourcegroups/XXXXX-prgateway/providers/microsoft.network/applicationgateways/gateway-pr-ag"
| eval _raw = replace(_raw, "\":\"\",", "\":\"N/A\",")
| spath
| rename body.records{}.timeStamp as timeStamp_
| rename body.records{}.properties.receivedBytes as receivedBytes_
| rename body.records{}.properties.clientIP as clientIP_
| rename body.records{}.properties.clientPort as clientPort_
| rename body.records{}.properties.host as host_
| rename body.records{}.properties.httpMethod as httpMethod_
| rename body.records{}.properties.httpStatus as httpStatus_
| rename body.records{}.properties.originalHost as originalHost_
| rename body.records{}.properties.originalRequestUriWithArgs as originalRequestUriWithArgs_
| rename body.records{}.properties.timeTaken as timeTaken_
| rename body.records{}.properties.userAgent as userAgent_
| eval x = mvzip(timeStamp_, clientIP_, ";;") | mvexpand x | eval x = split(x, ";;") | eval timeStamp2 = mvindex(x, 0) | eval clientIP2 = mvindex(x, 1)
| eval x = mvzip(clientIP_, receivedBytes_, ";;") | mvexpand x | eval x = split(x, ";;") | eval clientIP2 = mvindex(x, 0) | eval receivedBytes2 = mvindex(x, 1)
| eval x = mvzip(clientIP_, clientPort_, ";;") | mvexpand x | eval x = split(x, ";;") | eval clientIP2 = mvindex(x, 0) | eval clientPort2 = mvindex(x, 1)
| eval x = mvzip(clientIP_, host_, ";;") | mvexpand x | eval x = split(x, ";;") | eval clientIP2 = mvindex(x, 0) | eval host2 = mvindex(x, 1)
| eval x = mvzip(clientIP_, httpMethod_, ";;") | mvexpand x | eval x = split(x, ";;") | eval clientIP2 = mvindex(x, 0) | eval httpMethod2 = mvindex(x, 1)
| eval x = mvzip(clientIP_, httpStatus_, ";;") | mvexpand x | eval x = split(x, ";;") | eval clientIP2 = mvindex(x, 0) | eval httpStatus2 = mvindex(x, 1)
| eval x = mvzip(clientIP_, originalHost_, ";;") | mvexpand x | eval x = split(x, ";;") | eval clientIP2 = mvindex(x, 0) | eval originalHost2 = mvindex(x, 1)
| eval x = mvzip(clientIP_, originalRequestUriWithArgs_, ";;") | mvexpand x | eval x = split(x, ";;") | eval clientIP2 = mvindex(x, 0) | eval originalRequestUriWithArgs2 = mvindex(x, 1)
| eval x = mvzip(clientIP_, timeTaken_, ";;") | mvexpand x | eval x = split(x, ";;") | eval clientIP2 = mvindex(x, 0) | eval timeTaken2 = mvindex(x, 1)
| eval x = mvzip(clientIP_, userAgent_, ";;") | mvexpand x | eval x = split(x, ";;") | eval clientIP2 = mvindex(x, 0) | eval userAgent2 = mvindex(x, 1)
| table timeStamp2, receivedBytes2, clientIP2, clientPort2, host2, httpMethod2, httpStatus2, originalHost2, originalRequestUriWithArgs2, timeTaken2, userAgent2
| dedup timeStamp2, receivedBytes2, clientIP2, clientPort2, host2, httpMethod2, httpStatus2, originalHost2, originalRequestUriWithArgs2, timeTaken2, userAgent2

I'm seeing exponentially more results in my table the more fields I add, and limits.conf is being mentioned for result truncation when mvexpand is used.
I'm seeing user agents which, when I check an individual record manually, do not map: where an event has just one entry for a clientIP, it's spawning many lines in my resulting table which appear to cover all combinations. Any help on how to query this slightly advanced JSON dataset would be good, or perhaps point out where I'm going wrong? P.S. we have historical data we need to process; event breaking is on the horizon, but we can't really get out of this one that easily...
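The all-combinations rows come from zipping independent whole-event multivalue fields: each mvzip/mvexpand pass multiplies the rows again. The flattening you want is per record — each element of body.records becomes one row, with all of its properties read together. A minimal Python sketch of that idea, using a subset of the field names from the sample event (in SPL, a commonly suggested equivalent is to expand the array once and re-parse each element, e.g. | spath path=body.records{} output=record | mvexpand record | spath input=record, though behavior depends on your data and limits.conf):

```python
def flatten_records(event):
    """Emit one flat dict per element of body.records, so values from the
    same record stay together. Zipping whole-event field lists instead is
    what produces the cross-record combinations."""
    rows = []
    for rec in event["body"]["records"]:
        props = rec.get("properties", {})
        rows.append({
            "timeStamp": rec.get("timeStamp"),
            "clientIP": props.get("clientIP"),
            "httpStatus": props.get("httpStatus"),
            "userAgent": props.get("userAgent"),
        })
    return rows

# Two records in -> exactly two rows out, one per record.
event = {"body": {"records": [
    {"timeStamp": "2021-03-05T23:53:01+00:00",
     "properties": {"clientIP": "4.5.6.7", "httpStatus": 302,
                    "userAgent": "AhrefsBot"}},
    {"timeStamp": "2021-03-05T23:53:03+00:00",
     "properties": {"clientIP": "1.2.3.4", "httpStatus": 200,
                    "userAgent": "HeadlessChrome"}},
]}}
rows = flatten_records(event)
```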
I have written a generating custom command which generates just one event containing the session token and the search head it runs on. These two parameters are then passed to another custom command which runs distributed on the indexers and needs them to call back to the search head. I am on Splunk Enterprise, Version: 8.0.3 with Splunk Python SDK 1.6.12, a single search head, and a 96-node indexer cluster. The Python script of the command starts with:

from __future__ import absolute_import, division, print_function, unicode_literals
import os, sys
import time

splunkhome = os.environ['SPLUNK_HOME']
sys.path.append(os.path.join(splunkhome, 'etc', 'apps', 'searchcommands_app', 'lib'))
from splunklib.searchcommands import dispatch, GeneratingCommand, Configuration, Option, validators

@Configuration(streaming=True, local=True)
class callbackconnection(GeneratingCommand):
    .......

The definition of the command in commands.conf is:

[callbackconnection]
filename = callbackconnection.py
# python.version=python3
chunked = true

This command runs fine while using Python 2.x, but as soon as python.version=python3 is active in the command's definition in commands.conf, the command gets distributed to the indexers and returns 97 results. I tried various @Configuration(....) settings, but with Python 3 the command is always distributed to the indexers.
After uploading the 3 files as per the instructions, I am supposed to see my events but there is nothing on the page even when I am logged in as admin or power user. 
Hello! I am experiencing trouble analyzing Threat Intelligence data in Enterprise Security. When I go to Security Intelligence -> Threat Intelligence -> Threat Activity, there is a Threat Activity Details panel. It would be great if you could help me with the following questions:

1) The data for this panel is taken from the Threat Intelligence data model (threat_activity index). If I understand correctly, the threat_activity index is filled with the help of all these searches (Certificates Intelligence, Email Intelligence, etc.). Can you please show me an example of how to add more data to this index? I would like to receive information not only about the threat match field, but also about the data model the threat match field came from, the type of data (rec_type_simple), src and dest ports, etc.

2) In the Threat Activity Details panel we have a field threat_group (misp_es_domain_intel, misp_es_ip_intel). If you are aware, please tell me where I can change this field (I would like to add a pipe-delimited field from a lookup, so that it looks like misp_es_domain_intel | CERT). I have already reviewed the full search which forms the Threat Activity Details panel, but there are a lot of macros and I get lost as to where exactly I should change the threat_group field.
I am using 2 CSV files and the "inputlookup" method. Right now I am appending one of the CSVs to the other, but the line chart that is displayed only has one line. Both of the CSV files have the same fields, but I want the graph to have 2 lines: one line for the first "Batch" and a second line for the second "Batch". How do I write the query so the graph is split by the "Batch" number and shows 2 lines?

Current query:

| from inputlookup:"5019609_V-094_9007270566.csv"
| append [| from inputlookup "5019609_V-094_9007280926.csv"]
| fields Hr MatlTemp

Current picture of line graph:
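A common approach is to tag each result set with a Batch value before appending, so the chart can split series on it — in SPL, roughly an | eval Batch=1 after the first inputlookup, | eval Batch=2 inside the append subsearch, then something like | chart values(MatlTemp) over Hr by Batch (field names taken from the query above). The tagging idea, illustrated in Python with the stdlib csv module:

```python
import csv
import io

def merge_with_batch(csv_texts):
    """Concatenate CSV datasets, tagging each row with the batch it came
    from, so a later chart can split one series per Batch value."""
    rows = []
    for batch, text in enumerate(csv_texts, start=1):
        for row in csv.DictReader(io.StringIO(text)):
            row["Batch"] = batch
            rows.append(row)
    return rows

# Two lookups with identical columns -> one table with a Batch discriminator.
batch1 = "Hr,MatlTemp\n1,100\n2,110\n"
batch2 = "Hr,MatlTemp\n1,95\n2,105\n"
rows = merge_with_batch([batch1, batch2])
```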
Hello,

In the search below, I need to display nothing if the Hostname is not found. Currently, the fields are displayed even if the hostname doesn't exist (see attachment here https://www.cjoint.com/c/KCmgUBEZndg). How can I display nothing if the hostname doesn't exist, please?

`wire`
| fields AP_NAME USERNAME LAST_SEEN
| lookup ap.csv NAME as AP_NAME OUTPUT Building Country Site
| rename USERNAME as host
| eval time = strftime(_time, "%d-%m-%y %H:%M")
| stats latest(time) as "Last check date", last(AP_NAME) as "Access Point", last(Site) as "Geolocation site", last(Building) as "Geolocation building" by host
| rename host as Hostname
| search Hostname=X3020131
| transpose 0 column_name="Geolocation items" header_field=header
| rename "row 1" as "Geolocation results"
Hi, I have extracted 2 fields, i.e. field1 & field2. When using the stats count command, how do I use these extracted fields so that they show up in the same table, like below?

source    issue     _time
userx     field1    12:00
usery     field1    13:00
userz     field2    14:00

Or should I use 2 different tables for field1 and field2? Below are the searches I am using:

index=test1 | rex field=source "\\\\(?<source>\w+)-(?<issue>\w+)\.log$" | stats count by source,_time,field1 | sort -_time | fields source,_time,field1

index=test1 | rex field=source "\\\\(?<source>\w+)-(?<issue>\w+)\.log$" | stats count by source,_time,field2 | sort -_time | fields source,_time,field2
Hi Everyone, I have the below query:

index=abc ns=yu CASE(ERROR)
| rex field=_raw "ERROR(?<Error_Message>.*)"
| eval _time = strftime(_time,"%Y-%m-%d %H:%M:%S.%3N")
| dedup Error_Message
| cluster showcount=t t=0.2
| table Error_Message, _time, env, pod_name, ns, cluster_count
| rename app_name as APP_NAME, _time as Time, env as Environment, pod_name as Pod_Name, Error_Message as Error_Message, cluster_count as Count

I am getting one error message like this:

Error_Message    Count
",               5

I don't want this ", error message to be displayed; it's clubbing 5 messages into 1. When not using cluster, I get all 5 as separate results, but 4 of the messages are the same (their ids differ, so dedup is not working). Can someone guide me on how to remove this ", message from the results?
Hi Everyone, can anyone please share how many hours are required to upgrade Splunk from version 7.2.1 to the latest? Our environment currently has 4 Splunk AWS instances (IDX1, IDX2, DS, SH).
Hi Team, I have the below logs in Splunk, and I'm looking for a query to get the time taken to complete the run by each bot for each scheduled run ("Run bot Deployed" marks the start of the bot and "Run bot finished" marks the end). A bot can run more than once in a day, and I need the average run time as well. I tried using the transaction command but am not getting the desired results. Thanks for the help.

11/03/2021 22:22:20.600 STATUS=Successful,ACTIVITY AT=2021-03-11T22:22:19Z,ACTION TYPE=Run bot finished,ITEM NAME=fin_bot
11/03/2021 22:00:27.000 STATUS=Successful,ACTIVITY AT=2021-03-11T22:00:26Z,ACTION TYPE=Run bot Deployed,ITEM NAME=fin_bot
11/03/2021 15:20:04.400 STATUS=Successful,ACTIVITY AT=2021-03-11T15:20:04Z,ACTION TYPE=Run bot finished,ITEM NAME=fin_bot
11/03/2021 15:00:23.000 STATUS=Successful,ACTIVITY AT=2021-03-11T15:00:22Z,ACTION TYPE=Run bot Deployed,ITEM NAME=fin_bot
12/03/2021 04:02:15.800 STATUS=Successful,ACTIVITY AT=2021-03-12T04:02:14Z,ACTION TYPE=Run bot finished,ITEM NAME=tax_bot
12/03/2021 04:00:23.780 STATUS=Successful,ACTIVITY AT=2021-03-12T04:00:23Z,ACTION TYPE=Run bot Deployed,ITEM NAME=tax_bot
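The pairing logic the question describes can be sketched outside SPL: sort the events by time, remember the latest "Run bot Deployed" per bot, and close it out at the next "Run bot finished" for the same bot. A Python sketch using the ACTIVITY AT timestamps from the sample events above (one possible SPL analogue is a stats range(_time) over a per-run grouping, but details vary with the data):

```python
from datetime import datetime

def run_durations(events):
    """Pair each 'Run bot Deployed' with the next 'Run bot finished' for
    the same bot; return per-bot lists of run durations in seconds."""
    starts, durations = {}, {}
    for ts, action, bot in sorted(events):  # ISO timestamps sort correctly
        t = datetime.strptime(ts, "%Y-%m-%dT%H:%M:%SZ")
        if action == "Run bot Deployed":
            starts[bot] = t
        elif action == "Run bot finished" and bot in starts:
            durations.setdefault(bot, []).append(
                (t - starts.pop(bot)).total_seconds())
    return durations

# ACTIVITY AT values from the sample log lines
events = [
    ("2021-03-11T22:22:19Z", "Run bot finished", "fin_bot"),
    ("2021-03-11T22:00:26Z", "Run bot Deployed", "fin_bot"),
    ("2021-03-11T15:20:04Z", "Run bot finished", "fin_bot"),
    ("2021-03-11T15:00:22Z", "Run bot Deployed", "fin_bot"),
    ("2021-03-12T04:02:14Z", "Run bot finished", "tax_bot"),
    ("2021-03-12T04:00:23Z", "Run bot Deployed", "tax_bot"),
]
durs = run_durations(events)
avg_fin = sum(durs["fin_bot"]) / len(durs["fin_bot"])
```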
I have used the following, but it is not producing any results. Please check it for errors.
I have used the following but not producing any results. Please check it for errors.   ... | eval etime=strptime(time, "%d/%m/%Y"), sevenDaysAgo=relative_time(now(), "-7d")| where etime < sevenDaysAgo  
Hello,

I'm trying to make a time-based table to check the trend at a glance. The data is currently accumulated on a daily basis and contains total GB and used GB per storage.

I successfully made a timechart where I can check the trend of the usage (used / total * 100) data, using the command below:

my base search
| eval usage = round('storage.used' / 'storage.total' * 100, 2)
| timechart span=1d limit=0 values(usage) by name

I want to make a table that shows the usage data of today, 1 month ago, 3 months ago, and 6 months ago, by storage, like below. (Usage must be calculated as used / total * 100 as of the specific date.)

storage  used  total  usage(today)  usage(1mon)  usage(3mon)  usage(6mon)
A        30    100    30            27           25           23
...

Is there any way to make such a table? Thank you.
I have the latest SA-LDAP, Splunk_TA_Windows, and Windows Infrastructure apps installed. I have sourcetype WinHostMon data coming in, but the Infrastructure app's guided setup says it is not detected. I jumped over to one of the infra dashboards, where all panels show "No results found" (e.g. Host Monitoring - Operations >> Disk Free Space Distribution), and opened that panel in search. By simply adding index=windows, the search then works. Where does the app designate the default index its searches refer to?
Hi Team, we are receiving the warning message below on our search head clusters:

"The limit has been reached for log messages in info.csv. 18 messages have not been written to info.csv. Refer to search.log for these messages or limits.conf to configure this limit."

What is the relevance of info.csv, and how does this limit breach affect Splunk search performance and search results?

Thanks, Mani
How do I set up Splunk to ingest a few web server logs? Step-by-step instructions would be appreciated, if someone can help, please.
Hi, I have installed the Splunk App for Jenkins in our clustered environment and am using a Heavy Forwarder for data input. I am not seeing the Jenkins host in the Job Insight tab drop-down, but I can see the Jenkins host in all other tabs (Build Analysis, Test Analysis, etc.). Any help on this would be greatly appreciated.
I'm currently having trouble accessing Phantom via the web GUI; it's returning a 500 error. I just need to retrieve the custom playbooks that we've built. Is there a way to retrieve Phantom playbooks via the CLI, since it's an on-prem server? Thanks in advance!
I'm looking to retrieve a listing of alerts generated by a new Application Health Rule that is set to warning (so as not to trigger critical in production). I'm not sure whether it's possible to provide a time window, health rule name, and application name and get a CSV extract. We could then gauge how effective the rule is, or whether it needs tweaking, before enabling it in the production environment. Any help would be greatly appreciated. Thx
On Splunk Cloud (8.1.2101.1) I'm encountering a warning message in my search results and am trying to figure out why it is popping up. Does anybody have any idea what this message means and how to resolve it?
Hello Splunk Experts, I am trying to get a dashboard built for our team. Basically, we have this log (sample below); some values and names have been changed for security purposes.

2021-03-11 14:44:52.961, TestName="Callout", Timestamp = "1615491891055", calling_party="2133211234", called_party="3233211234", TestDuration="1000"
2021-03-11 14:44:37.964, TestName="Callin", Timestamp = "1615491871148", calling_party="3233211234", called_party="2133211234", TestDuration="1500"
2021-03-11 14:43:22.957, TestName="Callout", Timestamp = "1615491785132", calling_party="2133211234", called_party="3233211234", TestDuration="1250"
2021-03-11 14:43:07.958, TestName="Callin", Timestamp = "1615491770646", calling_party="3233211234", called_party="2133211234", TestDuration="2000"
2021-03-11 14:42:52.961, TestName="Callout", Timestamp = "1615491764476", calling_party="2133211234", called_party="3233211234", TestDuration="1100"
2021-03-11 14:42:37.959, TestName="Callin", Timestamp = "1615491745672", calling_party="3233211234", called_party="2133211234", TestDuration="1700"

These test cases run all day; the above is just a sample. What we want to do is consolidate the called-party and calling-party numbers into one list (let's call that field telephone_number), sum the test durations as a daily total per telephone number, compute how much a specific number is used every day, and chart that in a dashboard (via line chart or something else), broken down by day; we may expand this to 7-day or 30-day views. We tried the following, but it seems that after the | stats sum, I lose the _time field, so the timechart at the end does not work.
index=tester_main sourcetype=test_main (calling_party=* OR called_party=*)
| eval telephone_number=mvappend(calling_party, called_party)
| mvexpand telephone_number
| stats sum(TestDuration) as TestDuration by telephone_number
| eval TestDuration='TestDuration'/1000
| eval Utilization=round(((TestDuration/86400)*100),1)
| eval Utilization=if(Utilization > 100, 100, Utilization)
| eval Utilization=Utilization + "%"
| fields - TestDuration
| timechart span=1d avg(Utilization) by telephone_number

Thank you for your time and assistance.