All Topics


In my search result, I have some array fields like this: data.protoPayload.request.spec.containers{}.image. The `containers` field is an array, a list of multiple dictionaries. How can I include this in my alert messages? I've tried $result.data.protoPayload.request.spec.containers{}.image$ but it is not rendered to a value. Thanks in advance,
I'm looking to build a dashboard that takes a value as an input and then has a few different panels that pull data from external OSINT. For example, the input takes a domain; when it is searched on the dashboard, it queries MXToolbox via API (or something of that sort) and populates a panel within the dashboard with that information. Would this be possible to configure within the Search & Reporting app, or would a custom app be needed for this functionality? I haven't seen any option for this within the Splunk dashboard. I do use the workflow action where you can click on the domain and open a new tab to look at OSINT sites for the domain, but I want to populate this all within a dashboard to centralize the data.
Some users are sending heavy, poorly tuned searches to our search head cluster, and this crashes our search head server. How can we restrict these kinds of heavy searches, which consume most of the CPU and memory?
I want to extract dailyTime from the XML below and convert it into a time.

<globalView id="108" version="17" recordClassName="NormalizedEvent" retention="0" hourly="-1" hourlyTime="1284336038994" daily="-1" dailyTime="1284336038994" intervalMilliseconds="60000" writeUniqueCountersTime="0"> <criteria bop="AND"> <left> <expr> <interval serialization="custom"> <com.q1labs.ariel.Interval> <short>5000</short> <boolean>true</boolean> <short>5000</short> <boolean>true</boolean> </com.q1labs.ariel.Interval> </interval> </expr> <key class

Here is my props.conf:

[XMLPARSING]
KV_MODE = xml
SHOULD_LINEMERGE = true
BREAK_ONLY_BEFORE = <globalView\s\w*=("\d\d\d")
MAX_EVENTS = 600
EXTRACT-dailyTime = ^(?:[^=\n]*=){8}"(\d+)
TIME_FORMAT=%s%3N
TIME_PREFIX=dailyTime=
Lookahead=13
TRUNCATE = 1000
category = Custom
disabled = false
pulldown_type = true

But Splunk is not converting it.
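For reference, `TIME_FORMAT=%s%3N` expects epoch milliseconds, which is what dailyTime looks like. A quick sanity check outside Splunk, in plain Python, with the value taken from the sample XML:

```python
from datetime import datetime, timezone

# dailyTime from the sample XML, interpreted as epoch milliseconds
# (which is what TIME_FORMAT=%s%3N expects)
daily_time_ms = 1284336038994

# Convert milliseconds to seconds and decode as UTC
dt = datetime.fromtimestamp(daily_time_ms / 1000, tz=timezone.utc)
print(dt.strftime("%Y-%m-%d %H:%M:%S"))  # a timestamp in September 2010
```

If the decoded date looks right, the remaining question is whether Splunk is actually finding the value at TIME_PREFIX during index time.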
I am trying to flag results where the failure rate is greater than 4%. Is my formula correct for setting an anomaly? | inputlookup sample.csv | eval isananomaly = if('Failcount' / 'Totalcount' * 100 > 4, 1, 0)
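The threshold logic itself can be sanity-checked outside Splunk. A minimal sketch in plain Python (the counts are hypothetical), mirroring the eval expression:

```python
def is_anomaly(fail_count, total_count, threshold_pct=4.0):
    """Mirror the eval: return 1 when the failure rate exceeds threshold_pct."""
    if total_count == 0:
        return 0  # explicit guard; in SPL the division by zero would yield null
    return 1 if (fail_count / total_count) * 100 > threshold_pct else 0

print(is_anomaly(5, 100))  # 5% failure rate, above the cutoff
print(is_anomaly(4, 100))  # exactly 4% is not strictly greater than 4
```

One subtlety worth noting: the comparison is strictly greater than, so a rate of exactly 4% is not flagged.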
Hi all, Looking for some advice on the best way to document a deployment. I inherited a deployment and cannot get my head around how the use cases and alerting have been set up, for example which use cases use which logs. What would you advise? Thanks Dave
I am trying to get an average for the last (x) days for a specific day and hour. This search lists a count for the current day. I am trying to get an average for a specific field for the last 5 Mondays, Tuesdays, Wednesdays, etc. So if today were Monday, the first value, AL-A at 00, would be the average of the past (x) Mondays at 00 for AL-A. index=net_auth_long | eval time_hour=strftime(_time,"%H") | chart count over channel by time_hour limit=30
I have an Angular 10 application which compiles to static HTML and JS files. Is there a way to deploy it on Splunk Enterprise? Any document reference would be great. Thanks
Hi,  I'm trying to populate a dashboard using a base search and then pulling multiple stats from those results. base search:    index=production sourcetype="audit" environ::LV   inline search:    | appendpipe [ stats count AS Total by _time] | appendpipe [ search ("Error:" OR auditType="error") | stats count AS error by _time] | appendpipe [ where auditMicroSeconds>3 | stats count AS Over BY _time] | appendpipe [ search ("data retrieval" AND "failed") | stats count AS failed BY _time] | timechart span=30s count(Total) AS Total count(error) AS Error count(Over) AS Over    But it just doesn't work.  Hope this makes sense. TIA Steve
Hi, We've just upgraded our Splunk Enterprise heavy forwarder from 7.2.9.1 to 8.0.2.1, and now we cannot load any page (inputs, configuration, etc.) for the add-ons installed on that server. The Search & Reporting app is working. 1) Splunk Add-on for ServiceNow version 6.0.0, supports SE version 8 2) Splunk Add-on for AWS version 5.0.2, supports SE version 8 3) Splunk Platform Upgrade Readiness App v2.2.1, supports SE version 8. Pressing F12 on the hung page gives errors similar to this (using the Splunk Add-on for AWS as an example): https://our_server/en-US/static/@7888...:793/app/Splunk_TA_aws/polyfill.min.js not::ERR_ABORTED 500 (Internal Server Error)  inputs:62 Appreciate any ideas.
Hi Team, We are experiencing frequent high CPU usage on our indexers, and a huge factor seems to be searches with the "All time" time filter and real-time searches. Are there steps for restricting users from using the "All time" time filter and real-time searches? Is it related to Splunk roles? If yes, which capabilities should be removed so that users are not able to run "All time" searches and real-time searches?
Hello fellow Splunkers, I have 2 questions about Splunk SmartStore's cache manager: 1. How do I make sure that my cache manager is large enough to hold all warm buckets for 30 days? My daily license usage is about 15-20 GB/day and my Splunk is hosted in the cloud. 2. Which config parameters are responsible for deleting data archived in my remote storage (S3) once it is 60 days past the day it was archived OR 90 days past the data's creation? For example: I set frozenTimePeriodInSecs to 90 days, but after just 30 days the data leaves the cache manager and is archived in S3. So when that same data reaches its 90th day, how can the cache manager freeze/delete it if it's sitting in an S3 bucket? Thanks!
I have this search that returns the data from the last 10 days:

index="raw_eg8" earliest=-10d@d latest=now() | search "evento.ORIGEM_EVENTO" = "FileService" | search "evento.STATUS" = "PROCESSADO" | search "evento.SIGLA"="CB4" | spath "evento.SIGLA" | bucket _time span=1d | eval DayOfWeekC=strftime(_time, "%a") | eval DayOfWeekN=strftime(_time, "%m-%d-%Y") | table "evento.SIGLA", DayOfWeekC, DayOfWeekN | stats count by "evento.SIGLA", DayOfWeekN | eventstats sum(count) AS Total by "evento.SIGLA" | eval avg=Total/count | sort DayOfWeekN desc

And then I got these results. But, as you can see, on 10-05-2020 there is no data. How can I return count = 0 when there is no data? Like:

evento.SIGLA DayOfWeekN count Total avg
CB4 10-05-2020 0 8 8
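For context on why the row disappears: stats only emits groups for days that actually have events, so a day with no events simply has no row. The zero-fill idea can be sketched in plain Python (the daily counts here are hypothetical):

```python
from datetime import date, timedelta

# Hypothetical daily counts, with 10-05-2020 absent (no events that day)
counts = {"10-04-2020": 3, "10-06-2020": 5}

# Enumerate every day in the window and default missing days to 0
start = date(2020, 10, 4)
days = [(start + timedelta(days=i)).strftime("%m-%d-%Y") for i in range(3)]
filled = {d: counts.get(d, 0) for d in days}

print(filled)  # {'10-04-2020': 3, '10-05-2020': 0, '10-06-2020': 5}
```

The key step is enumerating the full calendar range first and then looking up each day's count, rather than only iterating over the days that produced events.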
Hello, I'm trying to run this query from the Splunk API and getting this error: 'rex' is not recognized as an internal or external command, operable program or batch file. Can you help me please?

"index=wineventlog sourcetype=\"WinEventLog:Security\" (EventCode=4698 OR EventCode=4702) *ADDC*\n| where LIKE(Account_Name,\"%$\")\n| eval operation=(if(EventCode==4698,\"new\",\"update\"))\n| rex field=Message \"<Command>(?<Command>[^\\;]+)</Command>\"\n| rex field=Message \"<Arguments>(?<Arguments>[^\\;]+)</Arguments>\"\n| rex field=Message \"<UserId>(?<UserId>[^\\;]+)</UserId>\"\n| where !LIKE(Command,\"%sc.exe\")\n| where !LIKE(Command,\"%usoclient.exe\")\n| where !LIKE(Command,\"%ceipdata.exe\")\n| where !LIKE(Command,\"%OfficeC2RClient.exe\")\n| where !LIKE(Command,\"%rundll32.exe\")\n| where !LIKE(Command,\"%wermgr.exe\")\n| where !LIKE(Command,\"%MusNotification.exe\")\n| where !LIKE(Command,\"%MpCmdRun.exe\")\n| where !LIKE(Command,\"%SymErr.exe\")\n| eval user=case(UserId==\"S-1-5-18\",\"Local System\",UserId==\"S-1-5-19\",\"Local Service\",UserId==\"S-1-5-20\",\"Network Service\",true(),UserId)\n| stats values(Task_Name) as taskname values(operation) as event by _time Account_Name Command Arguments user\n| sort - _time"
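Worth noting: that error message comes from the Windows shell (cmd.exe), not from Splunk, which suggests the SPL string ended up on a command line instead of in the HTTP request body. A hedged sketch of building the POST body for the standard search jobs endpoint (the SPL here is a shortened, hypothetical stand-in for the full query):

```python
import urllib.parse

# Shortened stand-in for the full SPL; note the leading "search" keyword
# that the REST API expects before a raw search string
spl = ('search index=wineventlog sourcetype="WinEventLog:Security" '
      '(EventCode=4698 OR EventCode=4702) '
      '| rex field=Message "<Command>(?<Command>[^;]+)</Command>"')

# The query belongs URL-encoded in the POST body of /services/search/jobs,
# never on a shell command line where cmd.exe would try to execute "rex"
payload = urllib.parse.urlencode({"search": spl, "output_mode": "json"})
print(payload[:60])
```

With the query safely in the request body, the pipe characters and quotes are just data, and the shell never sees them.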
I've "Opened in Search" one of my episode review searches, then typed ctrl-shift-e to view the "expanded search string".  Doing this, I found that the event count, along with other data, was obtained via lookup on itsi_notable_group_system_lookup (among other itsi tables).  I then expanded the search string for one of my notable event searches, but find no indication that this search writes to those tables.  What step(s) am I missing between the notable event search and the episode review search?  I'm trying to determine how the episode grouping is done, which appears to happen between the NE search and the episode review search.
Hi, I'd like to know how I can apply colors to the icon according to range values in Tree View (custom viz). The image below is the default visualization. I would like something like this one: Since I want to show a parent-child relationship, I'm using this viz. I appreciate any help.
(.env) root@ubuntu:/opt# pyagent run -c /etc/appdynamics.cfg ./odoo-bin 2020-10-07 06:34:01,498 [INFO] appdynamics.proxy.watchdog <3550>: Watchdog already running with pid=2584 2020-10-07 06:34:01,498 [INFO] appdynamics.proxy.watchdog <3550>: Watchdog already running with pid=2584 Running as user 'root' is a security risk. 2020-10-07 13:34:01,931 3547 INFO ? odoo: Odoo version 10.0 2020-10-07 13:34:01,931 3547 INFO ? odoo: addons paths: ['/root/.local/share/Odoo/addons/10.0', u'/opt/odoo/addons', u'/opt/addons'] 2020-10-07 13:34:01,931 3547 INFO ? odoo: database: default@default:default 2020-10-07 13:34:01,939 3547 INFO ? odoo.service.server: HTTP service (werkzeug) running on ubuntu:8069 2020-10-07 13:34:12,693 3547 INFO ? odoo.addons.report.models.report: You need Wkhtmltopdf to print a pdf version of the reports. 2020-10-07 13:34:12,858 3547 INFO ? odoo.http: HTTP Configuring static files 2020-10-07 13:34:12,874 3547 INFO root odoo.modules.loading: loading 1 modules... 2020-10-07 13:34:12,895 3547 INFO root odoo.modules.loading: 1 modules loaded in 0.02s, 0 queries 2020-10-07 13:34:12,902 3547 INFO root odoo.modules.loading: loading 12 modules... 2020-10-07 13:34:12,930 3547 INFO root odoo.modules.loading: 12 modules loaded in 0.03s, 0 queries 2020-10-07 13:34:13,062 3547 INFO root odoo.modules.loading: Modules loaded. 
----------------------------------------------------------------------------------------------------------- root@ubuntu:/tmp/appd/logs# cat proxyCore.2020_10_07__05_59_24.log [main] 07 Oct 2020 05:59:24,560 INFO com.singularity.proxyControl.ProxyControlEntryPoint - ProxyControl - init [main] 07 Oct 2020 05:59:24,567 INFO com.singularity.proxyControl.ProxyControlEntryPoint - comm dir set to: /tmp/appd/run/comm [main] 07 Oct 2020 05:59:24,579 INFO com.singularity.proxyControl.ZeroMQControlServer - ipcNodeBaseDir dir set to: /tmp/appd/run/comm/proxy-1334172154642585807 [main] 07 Oct 2020 05:59:24,579 INFO com.singularity.proxyControl.ZeroMQControlServer - ZeroMQControlServer - init [main] 07 Oct 2020 05:59:24,600 INFO com.singularity.proxyControl.ZeroMQControlServer - ControlReqRouterSocket started at:ipc:///tmp/appd/run/comm/0 [main] 07 Oct 2020 05:59:24,973 INFO com.singularity.proxyControl.ProxyControlEntryPoint - ProxyControl - init completed [main] 07 Oct 2020 05:59:24,973 INFO com.singularity.proxyControl.ProxyControlEntryPoint - Should register node at startup:false ------------------------------------------------------------------------------------------------------------------------ root@ubuntu:/tmp/appd/logs# cat watchdog.log 2020-10-07 05:59:22,547 [INFO] appdynamics.proxy.watchdog <2584>: Started watchdog with pid=2584 2020-10-07 05:59:22,550 [INFO] appdynamics.proxy.watchdog <2584>: Starting proxy: /usr/local/lib/python2.7/dist-packages/appdynamics_bindeps/proxy/runProxy -j /usr/local/lib/python2.7/dist-packages/appdynamics_proxysupport -d /usr/local/lib/python2.7/dist-packages/appdynamics_bindeps/proxy -r /tmp/appd/run /tmp/appd/run/comm /tmp/appd/logs 2020-10-07 05:59:23,051 [INFO] appdynamics.proxy.watchdog <2584>: Started proxy with pid=2585 2020-10-07 06:34:01,498 [INFO] appdynamics.proxy.watchdog <3550>: Watchdog already running with pid=2584 2020-10-07 06:35:31,608 [INFO] appdynamics.proxy.watchdog <3639>: Watchdog already running with pid=2584 
2020-10-07 06:39:21,845 [INFO] appdynamics.proxy.watchdog <3733>: Watchdog already running with pid=2584 2020-10-07 06:39:32,777 [INFO] appdynamics.proxy.watchdog <3737>: Watchdog already running with pid=2584
Hello, I have been banging my head on a problem. What I am trying to do is run a first query to get a list of assets, then with that list update my KV store. I can do what I want in two separate searches, but when I combine them it does not work. I have tried append, join, and just stringing them together, but nothing works yet. My latest attempt was with join:

sourcetype="asset-info" | eval nowfield=now() | eval diff = ( nowfield-1814400) | convert timeformat="%Y-%m-%dT%H:%M:%S.%9NZ" mktime(last_found) as new_epoch | eval last_scanned=substr(new_epoch,1,10) | where last_scanned < diff | eval vuln_last_found=substr(last_found,1,10) | eval target_id = dns_name | join type=inner max=0 target_id [ | inputlookup kvstore_db | where fqdn=target_id AND state!="closed" | eval key=_key | eval state="oct7" | outputlookup kvstore_db append=True ]

The first half is the first search that gets the list of assets (target_id); then I filter on that with the KV store lookup (kvstore_db), followed by the outputlookup to actually update the state field with the value "oct7". This basically works as-is if I run the two searches independently, but when I put them together (which is what I need) it does not work. I am hoping someone can help.

Thanks, Joe
Dashboard experts: After installing the Dashboards App (beta) on ES 7.3.7, I launch the app and end up with a blank screen. How can I troubleshoot this problem? Thanks in advance, Alex
Hello, I have a field extraction set up to extract headers from .txt files. I added the props and transforms to the indexers as well as the search heads, but for some reason it isn't working.

My props on the indexers and search heads:

[storage:data:updated]
CHARSET=UTF-8
DATETIME_CONFIG=CURRENT
LINE_BREAKER=([\r\n]+)
NO_BINARY_CHECK=null
SHOULD_LINEMERGE=false
disabled=false
pulldown_type=true
TRANSFORMS-splitfieldsv2 = storage-fieldsv2

And my transforms:

[storage-fieldsv2]
CLEAN_KEYS = 0
REGEX = ^ *(?<Type>directory|file) +(?<AppliesTo>[^ ]+) +(?<Path>.+) +(?<Snap>[^ ]+) +(?<Hard>[^ ]+) +(?<Soft>[^ ]+) +(?<Adv>[^ ]+) +(?<Used>[^ ]+) +(?<Efficiency>\d+\.\d+\s\:\s\d+) *$

I know the extraction is right, as I tested it in my regex tester, but for some reason it isn't working in testing. The only place I haven't added this is the UF, since I was testing manually before adding it to the UF and sending the data.

Any idea why this isn't working?

Here's a sample of the .txt file:

Type AppliesTo Path Snap Hard Soft Adv Used Efficiency
--------------------------------------------------------------------------------------------------
directory DEFAULT /ifs/data/stuff/T1000-Reports No 100.00M - 99.00M 53.00 0.00 : 1
--------------------------------------------------------------------------------------------------
Total: 1

Thanks for the assistance
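One way to double-check the regex against the sample row outside Splunk is plain Python. Note that Python's `re` module writes named groups as `(?P<name>...)` rather than the PCRE-style `(?<name>...)` used in transforms.conf, but the pattern body is otherwise the same (spacing in the sample line is simplified to single spaces here):

```python
import re

# Same pattern as the transforms stanza, with Python-style named groups
pattern = re.compile(
    r"^ *(?P<Type>directory|file) +(?P<AppliesTo>[^ ]+) +(?P<Path>.+)"
    r" +(?P<Snap>[^ ]+) +(?P<Hard>[^ ]+) +(?P<Soft>[^ ]+) +(?P<Adv>[^ ]+)"
    r" +(?P<Used>[^ ]+) +(?P<Efficiency>\d+\.\d+\s:\s\d+) *$"
)

# Data row from the sample .txt file
line = "directory DEFAULT /ifs/data/stuff/T1000-Reports No 100.00M - 99.00M 53.00 0.00 : 1"

m = pattern.match(line)
print(m.group("Type"), m.group("Path"), m.group("Efficiency"))
```

If the pattern matches here but the fields still do not appear in Splunk, the usual suspects are where the props/transforms live relative to where search-time extraction runs, or the sourcetype on the events not actually being storage:data:updated.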