All Topics

Find Answers
Ask questions. Get answers. Find technical product solutions from passionate members of the Splunk community.

Hi, We have created a custom events index 'abcde' in Splunk Enterprise, but sending data to it with cURL always fails. If the index is 'main', it works fine. Could you please investigate this issue? Kind regards, Moacir
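For anyone hitting the same symptom: a minimal HEC request that targets a custom index looks roughly like the sketch below (hostname and token are placeholders). The most common reason a custom index fails while 'main' works is that the HEC token's allowed-indexes list does not include the custom index, so check the token's "Selected Indexes" setting.

  # hostname and token are placeholders; adjust to your deployment
  curl -k "https://splunk.example.com:8088/services/collector/event" \
    -H "Authorization: Splunk <your-hec-token>" \
    -d '{"index": "abcde", "sourcetype": "manual", "event": "hello from curl"}'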
Greetings, Dear Team! Could you kindly help me create a script or alert in Splunk that tells me which devices are not sending logs? I usually use a query to find the devices that are not sending logs, but I need an alert message for each device that is not sending logs. Manually I run: index=xxx earliest=1 | stats latest(_time) as _time count by host. I would like an alert, or some other way to be notified about, all the devices that are not sending logs. Kindly help me? Thank you in advance.
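A sketch of one common approach: a tstats search saved as a scheduled alert that triggers whenever results are returned, so each row is one silent device (the index name and the 24-hour threshold are placeholders to adjust):

  | tstats latest(_time) as last_seen where index=xxx by host
  | eval silent_hours = round((now() - last_seen) / 3600, 1)
  | where silent_hours > 24
  | convert ctime(last_seen)

Save it as an alert scheduled hourly with the trigger condition "number of results > 0".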
Hi, For our AWS Lambda services, we will need to send events to a Splunk Cloud instance. While installing the Splunk HTTP Event Collector logging interface for JavaScript (https://github.com/splunk/splunk-javascript-logging), we got this error:

  npm install --save splunk-logging
  15:30 $ cat /home/moacir/.npm/_logs/2022-07-21T13_24_33_020Z-debug-0.log
  0 verbose cli [
  0 verbose cli   '/home/moacir/.nvm/versions/node/v16.15.0/bin/node',
  0 verbose cli   '/home/moacir/.nvm/versions/node/v16.15.0/bin/npm',
  0 verbose cli   'install'
  0 verbose cli ]
  1 info using npm@8.5.5
  2 info using node@v16.15.0
  3 timing npm:load:whichnode Completed in 0ms
  4 timing config:load:defaults Completed in 2ms
  5 timing config:load:file:/home/moacir/.nvm/versions/node/v16.15.0/lib/node_modules/npm/npmrc Completed in 1ms
  6 timing config:load:builtin Completed in 1ms
  7 timing config:load:cli Completed in 2ms
  8 timing config:load:env Completed in 0ms
  9 timing config:load:file:/home/moacir/workspace/vmx-pma-arc-npm-modules/packages/arc-lambdas/sns-listeners/SIEMPublishers/.npmrc Completed in 0ms
  10 timing config:load:project Completed in 10ms
  11 timing config:load:file:/home/moacir/.npmrc Completed in 1ms
  12 timing config:load:user Completed in 1ms
  13 timing config:load:file:/home/moacir/.nvm/versions/node/v16.15.0/etc/npmrc Completed in 0ms
  14 timing config:load:global Completed in 0ms
  15 timing config:load:validate Completed in 2ms
  16 timing config:load:credentials Completed in 1ms
  17 timing config:load:setEnvs Completed in 2ms
  18 timing config:load Completed in 23ms
  19 timing npm:load:configload Completed in 23ms
  20 timing npm:load:setTitle Completed in 1ms
  21 timing config:load:flatten Completed in 4ms
  22 timing npm:load:display Completed in 5ms
  23 verbose logfile /home/moacir/.npm/_logs/2022-07-21T13_24_33_020Z-debug-0.log
  24 timing npm:load:logFile Completed in 8ms
  25 timing npm:load:timers Completed in 0ms
  26 timing npm:load:configScope Completed in 0ms
  27 timing npm:load Completed in 37ms
  28 timing arborist:ctor Completed in 1ms
  29 silly logfile start cleaning logs, removing 1 files
  30 timing idealTree:init Completed in 89ms
  31 timing idealTree:userRequests Completed in 0ms
  32 silly idealTree buildDeps
  33 silly fetch manifest needle@^2.6.0
  34 verbose shrinkwrap failed to load node_modules/.package-lock.json out of date, updated: node_modules
  35 silly fetch manifest jsdoc@^3.6.7
  36 silly fetch manifest jshint@^2.12.0
  37 silly fetch manifest mocha@^8.4.0
  38 silly fetch manifest nyc@^15.1.0
  39 silly placeDep ROOT jsdoc@ OK for: @verimatrix/arc-lambdas-siem-publishers-sns-listener@16.0.0 want: ^3.6.7
  40 silly placeDep ROOT jshint@ OK for: @verimatrix/arc-lambdas-siem-publishers-sns-listener@16.0.0 want: ^2.12.0
  41 silly placeDep ROOT mocha@ OK for: @verimatrix/arc-lambdas-siem-publishers-sns-listener@16.0.0 want: ^8.4.0
  42 silly placeDep ROOT needle@ OK for: @verimatrix/arc-lambdas-siem-publishers-sns-listener@16.0.0 want: ^2.6.0
  43 silly placeDep ROOT nyc@ OK for: @verimatrix/arc-lambdas-siem-publishers-sns-listener@16.0.0 want: ^15.1.0
  44 timing idealTree:#root Completed in 350414ms
  45 timing idealTree:node_modules/jsdoc Completed in 0ms
  46 timing idealTree:node_modules/jshint Completed in 0ms
  47 timing idealTree:node_modules/mocha Completed in 0ms
  48 timing idealTree:node_modules/needle Completed in 0ms
  49 timing idealTree:node_modules/nyc Completed in 0ms
  50 timing idealTree:buildDeps Completed in 350417ms
  51 timing idealTree:fixDepFlags Completed in 6ms
  52 timing idealTree Completed in 350515ms
  53 timing command:install Completed in 350522ms
  54 verbose type system
  55 verbose stack FetchError: request to http://localhost:4873/jsdoc failed, reason: connect ECONNREFUSED 127.0.0.1:4873
  55 verbose stack     at ClientRequest.<anonymous> (/home/moacir/.nvm/versions/node/v16.15.0/lib/node_modules/npm/node_modules/minipass-fetch/lib/index.js:130:14)
  55 verbose stack     at ClientRequest.emit (node:events:527:28)
  55 verbose stack     at Socket.socketErrorListener (node:_http_client:454:9)
  55 verbose stack     at Socket.emit (node:events:539:35)
  55 verbose stack     at emitErrorNT (node:internal/streams/destroy:157:8)
  55 verbose stack     at emitErrorCloseNT (node:internal/streams/destroy:122:3)
  55 verbose stack     at processTicksAndRejections (node:internal/process/task_queues:83:21)
  56 verbose cwd /home/moacir/workspace/vmx-pma-arc-npm-modules/packages/arc-lambdas/sns-listeners/SIEMPublishers
  57 verbose Linux 5.8.0-53-generic
  58 verbose argv "/home/moacir/.nvm/versions/node/v16.15.0/bin/node" "/home/moacir/.nvm/versions/node/v16.15.0/bin/npm" "install"
  59 verbose node v16.15.0
  60 verbose npm v8.5.5
  61 error code ECONNREFUSED
  62 error syscall connect
  63 error errno ECONNREFUSED
  64 error FetchError: request to http://localhost:4873/jsdoc failed, reason: connect ECONNREFUSED 127.0.0.1:4873
  64 error     at ClientRequest.<anonymous> (/home/moacir/.nvm/versions/node/v16.15.0/lib/node_modules/npm/node_modules/minipass-fetch/lib/index.js:130:14)
  64 error     at ClientRequest.emit (node:events:527:28)
  64 error     at Socket.socketErrorListener (node:_http_client:454:9)
  64 error     at Socket.emit (node:events:539:35)
  64 error     at emitErrorNT (node:internal/streams/destroy:157:8)
  64 error     at emitErrorCloseNT (node:internal/streams/destroy:122:3)
  64 error     at processTicksAndRejections (node:internal/process/task_queues:83:21) {
  64 error   code: 'ECONNREFUSED',
  64 error   errno: 'ECONNREFUSED',
  64 error   syscall: 'connect',
  64 error   address: '127.0.0.1',
  64 error   port: 4873,
  64 error   type: 'system',
  64 error   requiredBy: '.'
  64 error }
  65 error
  65 error If you are behind a proxy, please make sure that the
  65 error 'proxy' config is set properly. See: 'npm help config'
  66 verbose exit 1
  67 timing npm Completed in 350839ms
  68 verbose unfinished npm timer reify 1658409873316
  69 verbose unfinished npm timer reify:loadTrees 1658409873320
  70 verbose code 1
  71 error A complete log of this run can be found in:
  71 error     /home/moacir/.npm/_logs/2022-07-21T13_24_33_020Z-debug-0.log

I assume Splunk Cloud is supported? Requirements:
- Node.js v4 or later. Splunk logging for JavaScript is tested with Node.js v10.0 and v14.0.
- Splunk Enterprise 6.3.0 or later, or Splunk Cloud. Splunk logging for JavaScript is tested with Splunk Enterprise 8.0 and 8.2.0.
- An HTTP Event Collector token from your Splunk Enterprise server.

Could you please investigate this issue? Why is there a connection to 127.0.0.1:4873? Kind regards, Moacir
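A note on the 127.0.0.1:4873 question: port 4873 is the default port of Verdaccio, a local npm registry proxy, so the likely cause is a registry override in one of the .npmrc files the log shows being loaded (for example the project-level one at .../SIEMPublishers/.npmrc), not anything in the splunk-logging package itself. A quick way to check and, assuming you want the public registry, reset it:

  npm config get registry
  # if it prints http://localhost:4873/, point npm back at the public registry:
  npm config set registry https://registry.npmjs.org/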
On my HF, I parsed an example log from a local file and stored the parsing as a sourcetype. Then I created an index for event storage. However, I cannot locate this on the Search Head after searching for it, and I am uncertain whether the index exists on the indexer. There are two HFs, two Indexers, two Search Heads, and one Cluster Master/Deployment Server/License Manager. outputs.conf should be properly set, as other integrations are already operational.
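One quick check you can run from the Search Head, as a sketch: list the indexes actually visible to it and confirm the new index appears (eventcount shows an index even when it holds little or no data):

  | eventcount summarize=false index=* | dedup index | fields index

If the index is missing from that list, it was likely never created on the indexers (creating it on the HF alone is not enough).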
I would like to build a dashboard in a grid layout so that I can adjust the length and width of each panel based on my requirements, instead of Splunk adjusting them itself. Any suggestions that don't require installing an additional app?
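Without installing anything, Dashboard Studio's absolute layout lets you place and size each panel in exact pixels. A minimal sketch of the layout section of a Studio dashboard definition (the item ID and pixel values are placeholders):

  "layout": {
    "type": "absolute",
    "options": {"width": 1440, "height": 960},
    "structure": [
      {"item": "viz_chart_1", "type": "block", "position": {"x": 20, "y": 20, "w": 700, "h": 400}}
    ]
  }

The grid layout type is also available if you prefer panels that snap but remain resizable.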
Good morning / afternoon, I am a cybersecurity professional who has been asked whether there is a way to verify that Splunk is capturing all the Windows Event logs. Currently the forwarder is configured to send all standard Windows log data to Splunk, which we use for domain and system cybersecurity event audits. I am confident my inputs.conf and forwarders are configured properly, but the ask is essentially trust-but-verify that Splunk is indexing the appropriate data. I know Splunk is used worldwide, specifically in SOCs. If one were asked to verify the data is in fact whole, is there a way to test other than manually generating events and periodically comparing both Splunk and Windows Event Viewer? Obviously this could come down to configuration, but with such a high level of concern around cybersecurity, I assume organizations need to trust that the data in Splunk is accurate. How can I verify? Any tips?
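One low-effort cross-check, as a sketch (the index name is an assumption; yours may differ): count events per Windows log channel in Splunk over a fixed window on a sample host, then compare each total against what Event Viewer reports for that same log and window on the host itself.

  index=wineventlog host=<sample-host> earliest=-24h@h latest=@h
  | stats count by source

Matching totals per channel over several spot-checked windows is a reasonable proxy for completeness without generating synthetic events.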
I've created an alert in Splunk Enterprise and used the Splunk SOAR / Phantom plugin to call the action "Run a playbook in Splunk SOAR". So far so good: the alert fires and gets forwarded over to SOAR. SOAR creates a new event, takes the original event data, and creates an artifact with the details. Then it changes the tag value and creates another artifact... and another... and another. Only one tag is assigned to each artifact, those being "endpoint", "filesystem", "os", "registry", "security", "success", "track_event_signatures", and "windows". I can't find any mention of these tags anywhere, from the original data through the Splunk Enterprise alert config, so I think it's SOAR adding the additional data, but I'm not sure how, when, or why it's doing that. If each tag is necessary, is there a way I can force it to add all 8 tags to an array on a single artifact? Please advise.
Hello community, I apologize in advance: I don't speak English natively, so my writing won't be perfect. I have a problem with a comparison. I want to compare a number of acknowledgments against a number of alerts over a period of X minutes (for example: the number of acknowledgments between 0 and 5 min VS the number of resolved alerts between 0 and 5 min). I use Splunk OnCall and I think I found the right search for it, but I don't know why I can't make a clean graph from it. I would like a bar with padding to indicate the delta between my acknowledged alerts and the total alerts. Here is what my search yields: [screenshot] Do you know how to force the display to show the delta in question? Best regards,
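One way to get a visible delta, as a rough sketch with assumed field names (the status values depend on your OnCall data): compute acknowledged and not-yet-acknowledged counts per time bucket, then render them as a stacked column chart so the unacknowledged segment is the delta on top of each bar.

  <base search>
  | timechart span=5m count(eval(status="acknowledged")) as acknowledged count as total
  | eval unacknowledged = total - acknowledged
  | fields _time acknowledged unacknowledged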
Hi folks, I have an admin user running a search and he gets results; however, other users with a custom role that has permissions to the same index are unable to get the same results. This is an example of the search: index=microsoft-logs ValueName="HA173MN" The custom role gets results with just the index, but when adding the ValueName filter it doesn't get anything. It seems the problem is related to the permissions of some knowledge object that creates that field. Is there a query or an easy way to identify which knowledge object is related to that field, so I can assign its permissions to the custom role? I appreciate the help.
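A REST-based sketch that often locates the extraction behind a field (run it as admin; the wildcard term is an assumption, and similar endpoints exist for field aliases and calculated fields):

  | rest /services/data/props/extractions
  | search value=*ValueName*
  | table title value eai:acl.app eai:acl.sharing eai:acl.perms.read

Objects with sharing "private" or read permissions limited to certain roles would explain why the admin sees the field but the custom role does not.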
I am getting the errors below in index=_internal for Splunk 8.2.5. The lookup is available and I am able to open it with my admin user. What can be the reason, and how can it be resolved?

  07-21-2022 15:11:03.840 +0300 ERROR LookupProviderFactory [100328 TcpChannelThread] - sid:summarize_1658405463.11224 The lookup table 'all_eventName' is disabled. Contact your system administrator.
  07-21-2022 15:11:03.840 +0300 ERROR LookupProviderFactory [100328 TcpChannelThread] - The lookup table 'all_eventName' does not exist or is not available.
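The first error says the lookup definition itself is disabled, which is a different condition from a missing file or missing permissions. A sketch to confirm from search (as admin):

  | rest /services/data/transforms/lookups
  | search title="all_eventName"
  | table title disabled filename eai:acl.app eai:acl.sharing

If disabled shows 1, re-enable the definition under Settings > Lookups > Lookup definitions (or remove disabled = 1 from the relevant transforms.conf stanza) and the second "does not exist or is not available" error should clear with it.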
Hi All, I have a few concerns regarding bucket-rolling criteria; my question is focused on hot buckets. We have 2 types of indexes:
1. Default
2. Local (customized) indexes
When I check the log retention of the default index, hot shows 90 days, maxBucketCreate=auto, maxDbSize=auto, and we don't define anything for the local indexes. While checking, we found that for one particular index we only have 55 days of logs in our hot buckets, and the log consumption for this index is roughly 12-14 GB per day. For other local indexes we can see more than 104 days of logs. My concern is which retention policy Splunk follows to roll the buckets for a local index:
1. The 90-day period (which is not happening here), or
2. Rolling when the hot bucket is full on a per-day basis (if Splunk follows this, how much data can an index store per day, how many hot buckets does a local index have, and how much data can each bucket contain)?
Hope I'm not confusing things. Thanks
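For reference, the settings that usually explain this live in indexes.conf; a minimal sketch with illustrative values (check your effective config with btool, e.g. splunk btool indexes list <index> --debug). The key point: hot buckets roll to warm based on size and span, not on retention, which is why a busy 12-14 GB/day index shows fewer days in hot than a quiet one.

  [local_index]
  frozenTimePeriodInSecs = 7776000   # retention: a bucket freezes when its newest event exceeds this age (90 days)
  maxDataSize = auto                 # "auto" = roughly 750 MB per bucket; a hot bucket rolls to warm at this size
  maxHotSpanSecs = 7776000           # a hot bucket also rolls once it spans this much time
  maxHotBuckets = 3                  # number of hot buckets open at once (default may vary by version)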
Hi, I have a search like the one below, where the logs come from the fig1, fig4, fig5 and fig6 indexes from either of 2 hosts, say host1 and host2. The 2 hosts won't both send logs at the same time; only one of the hosts will be actively sending logs to the fig1 index with sourcetype abc.

  | tstats latest(_time) as latest_time WHERE (index=fig*) (NOT index IN (fig2,fig3)) sourcetype="abc" by host index sourcetype
  | eval silent_in_hours=round((now() - latest_time)/3600,2)
  | where silent_in_hours>20
  | eval latest_time=strftime(latest_time, "%m/%d/%Y %H:%M:%S")

I want to build logic so that if either host1 or host2 is sending logs, the above query gives no output (it should not display the silent host, because we are getting the logs from the other host). Thanks in advance
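A sketch of one way to express "only alert when every host for that index/sourcetype is silent", keeping your field names: compute the minimum silence across hosts with eventstats and gate on that, so a single active host suppresses the whole group.

  | tstats latest(_time) as latest_time where index=fig* NOT index IN (fig2,fig3) sourcetype="abc" by host index sourcetype
  | eval silent_in_hours = round((now() - latest_time)/3600, 2)
  | eventstats min(silent_in_hours) as least_silent by index sourcetype
  | where least_silent > 20
  | eval latest_time = strftime(latest_time, "%m/%d/%Y %H:%M:%S")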
Hello, I use the search below in order to timechart events on the field "BPE - Evolution du ratio de perte de paquets". It works fine, but is there a way to do the same thing more simply, please?

  `index` sourcetype="netproc_tcp" ezc="BPE"
  | fields netproc_tcp_retrans_bytes site
  | bin _time span=30m
  | stats sum(netproc_tcp_retrans_bytes) as "PaquetsPerdusBPE" by _time site
  | search site="$site$"
  | append [
      | search `index` sourcetype="netproc_tcp" ezc="BPE"
      | fields netproc_tcp_total_bytes site
      | bin _time span=30m
      | stats sum(netproc_tcp_total_bytes) as "PaquetsGlobauxBPE" by _time site ]
  | search site="$site$"
  | stats last("PaquetsPerdusBPE") as "BPE - Paquets perdus (bytes)", last("PaquetsGlobauxBPE") as "BPE - Nombre total de paquets (bytes)" by _time site
  | eval "BPE - Evolution du ratio de perte de paquets" = ('BPE - Paquets perdus (bytes)' / 'BPE - Nombre total de paquets (bytes)') * 100
  | fields - "BPE - Paquets perdus (bytes)" "BPE - Nombre total de paquets (bytes)" site
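Since both halves of the append run the same base search, a single stats pass can sum both counters at once; a sketch of the equivalent search, assuming both byte fields live in the same events (which the identical base searches suggest):

  `index` sourcetype="netproc_tcp" ezc="BPE" site="$site$"
  | bin _time span=30m
  | stats sum(netproc_tcp_retrans_bytes) as lost sum(netproc_tcp_total_bytes) as total by _time site
  | eval "BPE - Evolution du ratio de perte de paquets" = round((lost / total) * 100, 2)
  | fields _time "BPE - Evolution du ratio de perte de paquets"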
Hello community, I’m trying to figure out how to perform a search which considers events on different days. The idea is to search for an event by IP address, and what I’d like to achieve is to check whether the same IP (the same type of event) is observed in more than one specified timeframe (day/week/month). I started out with the following:

  <base-search> earliest="-7d@d" latest="@d" | stats count by ip date

and thought I could check whether an IP address occurs on more than one date, though I suppose I’d have to loop through all the results for each IP, and I could not get the SPL to work at all. Instead I figured that I could use something like

  | bin span=1d _time | stats count as c_ip by _time

and compare the contents of the bins somehow, though the bins are still just by "date". I figured I could combine this with something like eval to get IP addresses which have events on more than one date in range, preferably with the number of events per date/bin and a total. This may also need some fillnull or similar.

  IP      2022-06-29   2022-07-01   2022-07-02   2022-07-12   Sum
  <ip1>   6            5            8            2            21
  <ip2>   -            5            -            4            9

Though I am not having any success. I hope I managed to articulate my idea here. If so, is what I’m aiming for possible? Any suggestions/feedback is greatly appreciated; close enough would be a lot better than nothing. Best regards // G
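A sketch that should produce a table very close to that, filtering to IPs seen on more than one day (the ip field name is taken from your example): count per IP per day, keep IPs whose distinct-day count exceeds 1, then pivot days into columns and add a row total.

  <base-search> earliest=-7d@d latest=@d
  | bin _time span=1d
  | stats count by ip _time
  | eventstats dc(_time) as days_seen by ip
  | where days_seen > 1
  | eval day = strftime(_time, "%Y-%m-%d")
  | xyseries ip day count
  | addtotals fieldname=Sum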
Hello, I have some field values which I am unable to replace with the 'replace' command in the csv file. I have power states of servers which are Powered On and Powered Off, and there are some fields which have both powered-on and powered-off status, like:

  server name PoweredOn
  server name PoweredOff
  server name poweredOn poweredOff
  server name poweredOn poweredOff suspended
  server name poweredOff PoweredOn poweredOff

I was able to change the field value of "poweredOn poweredOff suspended" with

  | replace "*poweredOff poweredOn suspended*" with "*Suspended*"

but when I change the command to

  | replace "*poweredOn poweredOff*" with "*PoweredOn*"

it doesn't take effect. Can anyone tell me how to replace these?
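If replace keeps misbehaving on these mixed values, an eval/case rewrite is usually more predictable because the match order and case-insensitivity are explicit. A sketch, assuming the field is called PowerState (adjust to your CSV's field name):

  | eval PowerState = case(
      match(PowerState, "(?i)suspended"), "Suspended",
      match(PowerState, "(?i)poweredOn"), "PoweredOn",
      match(PowerState, "(?i)poweredOff"), "PoweredOff",
      true(), PowerState)

The first matching branch wins, so "suspended" anywhere in the value takes precedence over the on/off variants.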
How do I sort the data based on the last word, i.e. the token after the final separator (underscore)?

  data_file_hyper_v_server
  data_file_linux_server
  data_file_vmware_instance
  data_file_win_server

Expected output:

  data_file_hyper_v_server
  data_file_linux_server
  data_file_win_server
  data_file_vmware_instance
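A sketch using rex to pull the final token and sort on it; the field name data_file is an assumption, and the expected output implies descending order on the suffix ("server" before "instance") with the full value ascending as a tiebreaker:

  | rex field=data_file "_(?<suffix>[^_]+)$"
  | sort 0 -suffix +data_file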
Hi everyone, I want to create an hourly alert that logs multiple servers' CPU usage, queue length, memory usage, and disk space used. I have managed to create the following query, which lists out my requirements nicely. (Note that in my screenshot the log is set to output every 5 minutes only, hence the null values.)

  index=* host=abc_server tag=performance (cpu_load_percent=* OR wait_threads_count=* OR mem_free_percent=* OR storage_free_percent=*)
  | eval cpu_load = 100 - PercentIdleTime
  | eval mem_used_percent = 100 - mem_free_percent
  | eval storage_used_percent = 100 - storage_free_percent
  | timechart eval(round(avg(cpu_load),2)) as "CPU Usage (%)", eval(round(avg(wait_threads_count), 2)) as "Queue Length", eval(round(avg(mem_used_percent), 2)) as "Memory Used (%)", eval(round(avg(storage_used_percent), 2)) as "Disk Space Used (%)"

For the next step, however, I am unable to insert the host's name as another column. Is there a way I can insert a new column for host name in a timechart, as shown below?

  Host name     _time                 CPU Usage   Queue Length   Memory Usage   Disk Space Usage
  abc_server    2022-07-21 10:00:00   1.00        0.00           37.30          9.12
  efg_server    2022-07-21 10:00:00   0.33        0.00           26.50          8.00
  your_server   2022-07-21 10:00:00   9.21        0.00           10.30          5.00
  abc_server    2022-07-21 10:01:00   1.32        0.00           37.30          9.12
  efg_server    2022-07-21 10:01:00   0.89        0.00           26.50          8.00
  your_server   2022-07-21 10:01:00   8.90        0.00           10.30          5.00

Thanks in advance.
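timechart can only split by one field and never keeps host as a column, so the usual pattern is bin + stats ... by _time host. A sketch keeping your evals (the host list and span are placeholders):

  index=* host IN (abc_server, efg_server, your_server) tag=performance (cpu_load_percent=* OR wait_threads_count=* OR mem_free_percent=* OR storage_free_percent=*)
  | eval cpu_load = 100 - PercentIdleTime
  | eval mem_used_percent = 100 - mem_free_percent
  | eval storage_used_percent = 100 - storage_free_percent
  | bin _time span=1h
  | stats avg(cpu_load) as cpu, avg(wait_threads_count) as queue, avg(mem_used_percent) as mem, avg(storage_used_percent) as disk by _time host
  | foreach cpu queue mem disk [ eval <<FIELD>> = round(<<FIELD>>, 2) ]
  | rename host as "Host name", cpu as "CPU Usage (%)", queue as "Queue Length", mem as "Memory Used (%)", disk as "Disk Space Used (%)"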
How can I collect data from NetApp into Splunk? Can someone suggest an approach?
Hi all, I found that searches in my unix index return events only up to the past two months for a significant number of sourcetypes (bash_history, audit, secure, sudo logs). Shouldn't the events be retained according to the retention period set using 'frozenTimePeriodInSecs'? We set the period to 365 days. Regards, Zijian
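Worth noting: retention can end earlier than frozenTimePeriodInSecs if the index hits its size cap (maxTotalDataSizeMB, default 500000 MB), since whichever limit is reached first freezes the oldest buckets. A dbinspect sketch to see what is actually on disk for the index:

  | dbinspect index=unix
  | eval oldest = strftime(startEpoch, "%F"), newest = strftime(endEpoch, "%F")
  | table bucketId state oldest newest sizeOnDiskMB
  | sort oldest

If the oldest remaining bucket is about two months old and the index's total size is near its cap, the size limit, not the time limit, is what is evicting your data.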
Hey All, I have this search, and I want two results on my visualization: I want to see both "Method" and "User". What is missing here?

  index=XXX sourcetype="XXX:XXX:message" data.logName="projects/*/logs/cloudaudit.googleapis.com%2Factivity" data.resource.labels.project_id IN (*) AND ( data.resource.type IN(*) (data.protoPayload.methodName IN ("*update*","*patch*","*insert*" ) AND data.protoPayload.authorizationInfo{}.permission IN ("*update*","*insert*")) OR (data.resource.type IN(*) (data.protoPayload.methodName IN ("*create*", "*insert*") AND data.protoPayload.authorizationInfo{}.permission="*create*")) OR (data.resource.labels.project_id IN (*) AND data.resource.type IN(*) data.protoPayload.methodName IN (*delete*)))
  | eval name1='data.protoPayload.authorizationInfo{}.resourceAttributes.name'
  | eval name2='data.protoPayload.authorizationInfo{}.resource'
  | eval Name=if(name1="-", name2, name1)
  | search Name!="-"
  | rename data.protoPayload.methodName as Method, data.resource.type as "Resource Type", data.protoPayload.authorizationInfo{}.permission as Permission, data.timestamp as Time, data.protoPayload.authenticationInfo.principalEmail as User, data.protoPayload.requestMetadata.callerIp as "Caller IP"
  | timechart count by Method
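timechart accepts only a single split-by field, so the usual workaround is to combine the two fields into one before charting (the separator string is arbitrary). A sketch of the final two lines:

  | eval MethodUser = Method . " / " . User
  | timechart count by MethodUser

Adding limit=0 useother=f to the timechart keeps every Method/User combination from being collapsed into OTHER.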