All Topics

Hi Experts, I want to trigger an alert when a particular host for source=WinEventLog:Security has not reported to Splunk in the last hour. I have a list of 30 critical hosts, and for those I have created a CSV lookup as shown below.

DC_Machines.csv

host    source
abc     WinEventLog:Security
bcd     WinEventLog:Security
xyz     WinEventLog:Security

What I have achieved so far:

| inputlookup DC_Machines.csv
| join type=left host [ metadata type=hosts index=os_windows index=os_windows_dc ]
| fillnull recentTime
| where recentTime < relative_time(now(), "-1h")
| fields host, recentTime, source

The above gives me the hosts from the lookup table that are not reporting at all (fine), but what about the hosts that are reporting everything except source=WinEventLog:Security? I want the query to return only the hosts that are missing just that one source. My approach might be completely wrong, or maybe I am missing something; I tried to add a filter on source, but it does not work in the above logic. Any suggestions, please? Thank you in advance.

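A minimal sketch of one way to scope the staleness check to that single source, assuming the index names from the post and that DC_Machines.csv lists every host to watch (untested):

| tstats latest(_time) as recentTime where index IN (os_windows, os_windows_dc) source="WinEventLog:Security" by host
| append
    [| inputlookup DC_Machines.csv
     | fields host ]
| stats max(recentTime) as recentTime by host
| fillnull value=0 recentTime
| where recentTime < relative_time(now(), "-1h")

Because the tstats clause is restricted to source="WinEventLog:Security", a host that still sends other sources but has gone quiet on Security shows up here, and the appended lookup rows make sure hosts with no Security events at all are not silently dropped.
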
Hello Team,

We are using Enterprise Security in our environment, and we have created correlation searches to trigger alerts. Among those, there is one particular use case related to process injection, and we are receiving a very large number of incidents on it because of one executable. I am not giving the name of the exe file here due to security concerns, so I will refer to it as ___.exe.

"Shellcode Injection activity detected. Process "___.exe" has injected itself into ___.exe, ***.exe, +++.exe, ^^^.exe" - we are receiving many incidents like this. I have some questions:

1. What is process injection, and why do processes get injected into others?
2. How could we stop it from injecting into others, in Splunk (or from Endgame)?
3. We want to fine-tune this use case; is there an alternative way to suppress these alerts from Endgame?
4. We have whitelisted some of these alerts by source process path and target process path, but ___.exe still triggers alerts with new target paths; it keeps injecting into other targets.

Can anyone please explain this clearly? I am a rookie in cybersecurity and don't understand many things yet, but this one is haunting me. Thank you in advance.

I have APIs with parameters in the middle of the path, and I am trying to match the API against a CSV, but nothing matches when I use lookup. For example, /serviceName/api/127364/api is what appears in the access logs, and I have /serviceName/api/*/api in my CSV file. Any idea how to make this work?

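By default a lookup matches exactly, so the * in the CSV is treated as a literal character rather than a wildcard. A sketch of a wildcard lookup definition in transforms.conf, assuming the file is api_list.csv and its path column is called api (adjust names to yours); the same setting is exposed in the UI as "Match type" under the lookup definition's advanced options:

# transforms.conf
[api_lookup]
filename = api_list.csv
match_type = WILDCARD(api)
max_matches = 1

The lookup call itself stays the same, e.g. (uri_path stands in for whatever field holds the request path):

... | lookup api_lookup api AS uri_path OUTPUT <the fields you need>
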
Working on a fresh install of Stream into an on-prem distributed environment with a small number of endpoints. I'm not sure where to install and operate Stream from, and I've seen differing instructions from 2019 to the present. Is the current best practice to install and operate Stream from a standalone server, or to install and run it from the deployment server?

Hi,

I created a Splunk server on AWS and, using the UI, I configured an HEC to listen for some logs. I am using Docker's Splunk logging driver to send the logs. If I leave the config the same on both servers, I receive the error:

"Error response from daemon: Options "https://<IP>:8088/services/collector/event/1.0": x509: certificate relies on legacy Common Name field, use SANs instead"

So I tried to change the Splunk config so that it will work with my self-signed certificate (which uses SANs). I did this by changing the inputs.conf in which the HEC was configured (bizarrely enough, under $SPLUNK_HOME/etc/apps/search/...) to have an [http] stanza with the path of the self-signed cert:

[root@machine introspection]# cat $SPLUNK_HOME/etc/apps/search/local/inputs.conf
[http]
serverCert = /opt/splunk/etc/auth/certs/root.pem

[http://test]
disabled = 0
host = <ip>
sourcetype = generic_single_line
token = <token>

I then moved the relevant [http] stanza to where I believe it should be (.../apps/splunk_httpinputs/...), but this didn't help. In fact, as soon as I put this stanza in, SSL connections to the Splunk IP on the relevant port no longer complete, for example:

openssl s_client -connect <ip>:8088

I would appreciate assistance with either fixing the original SANs issue (as it's the Splunk logging driver on Docker) or with the issue of using a self-signed certificate on HEC. Thanks!

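One plausible explanation for the handshake failing after the stanza was added: serverCert must point to a PEM that contains the server certificate and its private key (plus any chain), not just a CA/root certificate. A sketch, with illustrative paths, of what the stanza usually looks like (the global HEC settings normally live in the splunk_httpinput app):

# $SPLUNK_HOME/etc/apps/splunk_httpinput/local/inputs.conf
[http]
enableSSL = 1
# PEM containing, in order: server cert, private key, intermediate/root chain
serverCert = /opt/splunk/etc/auth/mycerts/hec_server_combined.pem
sslPassword = <private key passphrase, if the key is encrypted>

If root.pem holds only a certificate, Splunk has no key with which to complete the TLS handshake, which would match the openssl s_client symptom. Restart Splunk after the change and re-test with openssl s_client -connect <ip>:8088.
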
Hi,

A team has asked me to keep 3 months' worth of data. I have told them that we have limited space on the disks and that retention is a function of data volume, not time (I know an index gets full and data drops off at the back). But how do I know how far back my data goes, so I can judge whether I need more disk space or need to increase the size of the index?

Below is a screenshot taken from one of the indexes; however, I don't believe the reading. I know that I keep up to 3-6 months of data, but not years. If I try running a search over that whole range, I don't think it will ever finish. Also, I don't believe the data from 2017; I think it somehow got backdated.

Any help would be great - cheers

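A cheap way to check how far back an index really goes, without searching raw events, is to read the bucket metadata (the index name is a placeholder):

| dbinspect index=your_index
| stats min(startEpoch) as earliest max(endEpoch) as latest
| eval earliest=strftime(earliest, "%F %T"), latest=strftime(latest, "%F %T")

dbinspect reads bucket boundaries rather than events, so it returns quickly even on large indexes. Bear in mind that a handful of backdated events will stretch a bucket's apparent time span, which may explain the 2017 reading.
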
We get a weekly ingest of a data set for our vulnerability management. Each line contains a unique value matching a vulnerability with a server. I want to be able to report on: a. how many new vulnerabilities are in this week's report compared to last week, and b. how many vulnerabilities have been fixed (so are no longer reported) in this week's list compared to last week. I'm looking for Splunk to tell me what's new and what's missing week by week, but also to track these over the long term. I can't seem to get any meaningful results with a 'set diff' search. Any help gratefully received!

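A sketch of a stats-based alternative to set diff that compares the two most recent complete weeks, assuming each event carries a unique vulnerability/server key in fields called vuln_id and dest (rename to match the feed):

index=vuln_scans earliest=-14d@w latest=@w
| eval week=if(_time >= relative_time(now(), "-7d@w"), "this_week", "last_week")
| stats dc(week) as weeks_seen values(week) as week by vuln_id dest
| eval status=case(weeks_seen==2, "unchanged", week=="this_week", "new", week=="last_week", "fixed")
| stats count by status

For the long-term view, schedule this weekly and write the per-status counts to a summary index (collect) or a KV store collection so the trend can be charted over months.
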
I'm looking for a way to have multiple nested filters in a dashboard. Currently I'm creating attendance charts for our e-staff, and we need more granularity because of the amount of unique data. The goal is to be able to isolate three values: Function, Hierarchy, and Cost Center. Currently there are 3 functions, 28 hierarchies, and nearly 200 cost centers. With the nested filters I hope to be able to choose a function, which would populate only the hierarchies under that function; then, when choosing a hierarchy, only the cost centers under it would be populated.

The issue I'm running into is that I can create two filters, but once I add the third filter the entire search breaks and the charts no longer update.

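In Simple XML, cascading dropdowns are normally built by letting each input's populating search reference the token of the input above it. A trimmed sketch with made-up index and field names (attendance, function, hierarchy, cost_center):

<input type="dropdown" token="function_tok" searchWhenChanged="true">
  <label>Function</label>
  <search>
    <query>index=attendance | stats count by function | fields function</query>
  </search>
  <fieldForLabel>function</fieldForLabel>
  <fieldForValue>function</fieldForValue>
</input>
<input type="dropdown" token="hierarchy_tok" searchWhenChanged="true">
  <label>Hierarchy</label>
  <search>
    <query>index=attendance function="$function_tok$" | stats count by hierarchy | fields hierarchy</query>
  </search>
  <fieldForLabel>hierarchy</fieldForLabel>
  <fieldForValue>hierarchy</fieldForValue>
</input>
<input type="dropdown" token="cc_tok" searchWhenChanged="true">
  <label>Cost Center</label>
  <search>
    <query>index=attendance function="$function_tok$" hierarchy="$hierarchy_tok$" | stats count by cost_center | fields cost_center</query>
  </search>
  <fieldForLabel>cost_center</fieldForLabel>
  <fieldForValue>cost_center</fieldForValue>
</input>

A common reason a third level "breaks everything" is a token that never gets set (panel searches then wait on it forever), so give each dropdown an <initialValue> or a default, and double-check the token quoting in the third query.
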
Hi,

We have created a custom events index 'abcde' in Splunk Enterprise, but sending data to it with cURL always fails. If the index is 'main', it works fine. Could you please investigate this issue?

Kind regards,

Moacir

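For reference, a minimal HEC test call with placeholder host and token. The most common cause of this symptom is that the HEC token's allowed-indexes list does not include the new index, in which case Splunk rejects the event (typically with "Incorrect index") even though the index exists:

curl -k "https://<splunk-host>:8088/services/collector/event" \
  -H "Authorization: Splunk <token>" \
  -d '{"index": "abcde", "sourcetype": "manual", "event": "hello from curl"}'

Check Settings > Data inputs > HTTP Event Collector > (your token) and make sure 'abcde' is among the token's selected indexes.
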
Greetings, dear team,

Kindly help me create a script or alert in Splunk that will tell me which devices are not sending logs. I usually use a query to find the devices that are not sending logs, but I need an alert message for each device that is not sending logs.

Manually:

index=xxx earliest=1
| stats latest(_time) as _time count by host

I would like to get an alert (or find another way to be notified) for all the devices that are not sending logs. Kindly help me. Thank you in advance.

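A sketch of a scheduled alert search for this, assuming hosts should be flagged after one hour of silence in index xxx (adjust the index, threshold, and schedule):

| tstats latest(_time) as last_seen where index=xxx by host
| eval silent_hours = round((now() - last_seen) / 3600, 2)
| where silent_hours > 1

Save it as an alert with trigger condition "Number of results > 0" and trigger mode "For each result" to get one notification per silent host. Note that tstats can only see hosts that sent something inside the search window; to catch hosts that have never reported at all, compare against a lookup of expected hosts.
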
Hi,

For our AWS Lambda services, we will need to send events to a Splunk Cloud instance. While installing the Splunk HTTP Event Collector logging interface for JavaScript (https://github.com/splunk/splunk-javascript-logging), we got this error:

npm install --save splunk-logging

15:30 $ cat /home/moacir/.npm/_logs/2022-07-21T13_24_33_020Z-debug-0.log
0 verbose cli [
0 verbose cli '/home/moacir/.nvm/versions/node/v16.15.0/bin/node',
0 verbose cli '/home/moacir/.nvm/versions/node/v16.15.0/bin/npm',
0 verbose cli 'install'
0 verbose cli ]
1 info using npm@8.5.5
2 info using node@v16.15.0
3 timing npm:load:whichnode Completed in 0ms
4 timing config:load:defaults Completed in 2ms
5 timing config:load:file:/home/moacir/.nvm/versions/node/v16.15.0/lib/node_modules/npm/npmrc Completed in 1ms
6 timing config:load:builtin Completed in 1ms
7 timing config:load:cli Completed in 2ms
8 timing config:load:env Completed in 0ms
9 timing config:load:file:/home/moacir/workspace/vmx-pma-arc-npm-modules/packages/arc-lambdas/sns-listeners/SIEMPublishers/.npmrc Completed in 0ms
10 timing config:load:project Completed in 10ms
11 timing config:load:file:/home/moacir/.npmrc Completed in 1ms
12 timing config:load:user Completed in 1ms
13 timing config:load:file:/home/moacir/.nvm/versions/node/v16.15.0/etc/npmrc Completed in 0ms
14 timing config:load:global Completed in 0ms
15 timing config:load:validate Completed in 2ms
16 timing config:load:credentials Completed in 1ms
17 timing config:load:setEnvs Completed in 2ms
18 timing config:load Completed in 23ms
19 timing npm:load:configload Completed in 23ms
20 timing npm:load:setTitle Completed in 1ms
21 timing config:load:flatten Completed in 4ms
22 timing npm:load:display Completed in 5ms
23 verbose logfile /home/moacir/.npm/_logs/2022-07-21T13_24_33_020Z-debug-0.log
24 timing npm:load:logFile Completed in 8ms
25 timing npm:load:timers Completed in 0ms
26 timing npm:load:configScope Completed in 0ms
27 timing npm:load Completed in 37ms
28 timing arborist:ctor Completed in 1ms
29 silly logfile start cleaning logs, removing 1 files
30 timing idealTree:init Completed in 89ms
31 timing idealTree:userRequests Completed in 0ms
32 silly idealTree buildDeps
33 silly fetch manifest needle@^2.6.0
34 verbose shrinkwrap failed to load node_modules/.package-lock.json out of date, updated: node_modules
35 silly fetch manifest jsdoc@^3.6.7
36 silly fetch manifest jshint@^2.12.0
37 silly fetch manifest mocha@^8.4.0
38 silly fetch manifest nyc@^15.1.0
39 silly placeDep ROOT jsdoc@ OK for: @verimatrix/arc-lambdas-siem-publishers-sns-listener@16.0.0 want: ^3.6.7
40 silly placeDep ROOT jshint@ OK for: @verimatrix/arc-lambdas-siem-publishers-sns-listener@16.0.0 want: ^2.12.0
41 silly placeDep ROOT mocha@ OK for: @verimatrix/arc-lambdas-siem-publishers-sns-listener@16.0.0 want: ^8.4.0
42 silly placeDep ROOT needle@ OK for: @verimatrix/arc-lambdas-siem-publishers-sns-listener@16.0.0 want: ^2.6.0
43 silly placeDep ROOT nyc@ OK for: @verimatrix/arc-lambdas-siem-publishers-sns-listener@16.0.0 want: ^15.1.0
44 timing idealTree:#root Completed in 350414ms
45 timing idealTree:node_modules/jsdoc Completed in 0ms
46 timing idealTree:node_modules/jshint Completed in 0ms
47 timing idealTree:node_modules/mocha Completed in 0ms
48 timing idealTree:node_modules/needle Completed in 0ms
49 timing idealTree:node_modules/nyc Completed in 0ms
50 timing idealTree:buildDeps Completed in 350417ms
51 timing idealTree:fixDepFlags Completed in 6ms
52 timing idealTree Completed in 350515ms
53 timing command:install Completed in 350522ms
54 verbose type system
55 verbose stack FetchError: request to http://localhost:4873/jsdoc failed, reason: connect ECONNREFUSED 127.0.0.1:4873
55 verbose stack at ClientRequest.<anonymous> (/home/moacir/.nvm/versions/node/v16.15.0/lib/node_modules/npm/node_modules/minipass-fetch/lib/index.js:130:14)
55 verbose stack at ClientRequest.emit (node:events:527:28)
55 verbose stack at Socket.socketErrorListener (node:_http_client:454:9)
55 verbose stack at Socket.emit (node:events:539:35)
55 verbose stack at emitErrorNT (node:internal/streams/destroy:157:8)
55 verbose stack at emitErrorCloseNT (node:internal/streams/destroy:122:3)
55 verbose stack at processTicksAndRejections (node:internal/process/task_queues:83:21)
56 verbose cwd /home/moacir/workspace/vmx-pma-arc-npm-modules/packages/arc-lambdas/sns-listeners/SIEMPublishers
57 verbose Linux 5.8.0-53-generic
58 verbose argv "/home/moacir/.nvm/versions/node/v16.15.0/bin/node" "/home/moacir/.nvm/versions/node/v16.15.0/bin/npm" "install"
59 verbose node v16.15.0
60 verbose npm v8.5.5
61 error code ECONNREFUSED
62 error syscall connect
63 error errno ECONNREFUSED
64 error FetchError: request to http://localhost:4873/jsdoc failed, reason: connect ECONNREFUSED 127.0.0.1:4873
64 error at ClientRequest.<anonymous> (/home/moacir/.nvm/versions/node/v16.15.0/lib/node_modules/npm/node_modules/minipass-fetch/lib/index.js:130:14)
64 error at ClientRequest.emit (node:events:527:28)
64 error at Socket.socketErrorListener (node:_http_client:454:9)
64 error at Socket.emit (node:events:539:35)
64 error at emitErrorNT (node:internal/streams/destroy:157:8)
64 error at emitErrorCloseNT (node:internal/streams/destroy:122:3)
64 error at processTicksAndRejections (node:internal/process/task_queues:83:21) {
64 error code: 'ECONNREFUSED',
64 error errno: 'ECONNREFUSED',
64 error syscall: 'connect',
64 error address: '127.0.0.1',
64 error port: 4873,
64 error type: 'system',
64 error requiredBy: '.'
64 error }
65 error
65 error If you are behind a proxy, please make sure that the
65 error 'proxy' config is set properly. See: 'npm help config'
66 verbose exit 1
67 timing npm Completed in 350839ms
68 verbose unfinished npm timer reify 1658409873316
69 verbose unfinished npm timer reify:loadTrees 1658409873320
70 verbose code 1
71 error A complete log of this run can be found in:
71 error /home/moacir/.npm/_logs/2022-07-21T13_24_33_020Z-debug-0.log

I assume Splunk Cloud is supported? The requirements are:

Node.js v4 or later. Splunk logging for JavaScript is tested with Node.js v10.0 and v14.0.
Splunk Enterprise 6.3.0 or later, or Splunk Cloud. Splunk logging for JavaScript is tested with Splunk Enterprise 8.0 and 8.2.0.
An HTTP Event Collector token from your Splunk Enterprise server.

Could you please investigate this issue? Why is there a connection to 127.0.0.1:4873?

Kind regards,

Moacir

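Not a Splunk issue as such, but one hedged observation: 4873 is the default port of Verdaccio-style private npm registries, so npm here appears to be configured (in one of the .npmrc files listed near the top of the log) to use a local proxy registry that is not running. Something like this would confirm it and, if appropriate, point npm back at the public registry:

# show which registry npm will contact
npm config get registry

# if it prints http://localhost:4873/ and the local proxy is not intended,
# either start that proxy or reset to the public registry:
npm config set registry https://registry.npmjs.org/
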
On my HF, I parsed an example log from a local file and stored the parsing as a sourcetype. Then I created an index for the event storage. However, I cannot find the data on the search head when I search for it, and I am not certain the index even exists on the indexers. There are two HFs, two indexers, two search heads, and one cluster master / deployment server / license manager. outputs.conf should be set properly, as other integrations are already operational.

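A quick check from the search head for whether the index exists on the indexers and holds events (the index name is a placeholder):

| eventcount summarize=false index=your_index
| table server index count

If this returns nothing, the index was probably created only on the HF. In a clustered deployment the index has to be defined on the indexers themselves (indexes.conf, typically pushed from the cluster master); a heavy forwarder only parses and forwards, so defining the index there is not enough.
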
I would like to build the dashboard in a grid layout so that I can adjust the length and width of each panel to my requirements, instead of Splunk adjusting it itself. Any suggestions without installing any additional app?

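One widely shared Simple XML workaround (no extra app) is to give panels ids and override their widths with CSS from a hidden HTML panel; a sketch with made-up panel ids:

<row>
  <panel id="panel_left">
    <!-- your first visualization -->
  </panel>
  <panel id="panel_right">
    <!-- your second visualization -->
  </panel>
</row>
<row depends="$alwaysHideCSS$">
  <panel>
    <html>
      <style>
        #panel_left { width: 30% !important; }
        #panel_right { width: 70% !important; }
      </style>
    </html>
  </panel>
</row>

On recent versions, Dashboard Studio's grid layout also lets you drag panel edges to set sizes without installing anything, if converting the dashboard is an option.
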
Good morning / afternoon,

I am a cybersecurity professional who has been asked whether there is a way to verify that Splunk is capturing all of the Windows event logs. Currently the forwarder is configured to send all standard Windows log data to Splunk, and we use Splunk for domain and system cybersecurity event audits. I am confident my inputs.conf and Splunk forwarders are configured properly, but the principle is trust but verify: I want to confirm that Splunk is indexing the appropriate data.

I know Splunk is in use worldwide, specifically in SOCs. If one were asked to verify that the data is in fact whole, is there a way to test this other than manually generating events and periodically comparing Splunk against Windows Event Viewer? Obviously a gap could come down to configuration, but with such a high level of concern around cybersecurity, I assume organizations need to trust that the data in Splunk is accurate. How can I verify it? Any tips?

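One hedged idea: Windows event log records carry a per-channel, monotonically increasing RecordNumber, so gaps in that sequence for a given host and channel suggest events Splunk never received. Assuming the standard Windows TA field names, a sketch:

source="WinEventLog:Security" host=dc01
| eval RecordNumber=tonumber(RecordNumber)
| sort 0 RecordNumber
| streamstats current=f last(RecordNumber) as prev_record
| eval gap=RecordNumber - prev_record
| where gap > 1
| table _time prev_record RecordNumber gap

Caveats: the counter is per channel and can reset when a log wraps or is cleared, so run it per source and per host, and treat hits as prompts for investigation rather than proof of loss.
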
I've created an alert in Splunk Enterprise and used the Splunk SOAR / Phantom plugin to call the action "Run a playbook in Splunk SOAR". So far so good: the alert fires and gets forwarded over to SOAR. SOAR creates a new event, takes the original event data, and creates an artifact with the details. Then it changes the tag value and creates another artifact... and another... and another. Only one tag is assigned to each artifact, those being "endpoint", "filesystem", "os", "registry", "security", "success", "track_event_signatures", and "windows". I can't find any mention of these tags anywhere, starting with the original data and the Splunk Enterprise alert config, so I think SOAR is adding the extra data, but I'm not sure how, when, or why. If each tag is necessary, is there a way I can force it to add all 8 tags to an array on a single artifact? Please advise.

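For what it's worth, those eight values look like Splunk CIM tags, the kind attached by event types and tags.conf on the Splunk side, which would mean they arrive with the search results rather than being invented by SOAR. A quick way to check from the alert's own search (placeholder for your base search):

<your alert search>
| stats count by eventtype tag

If the events show a multivalue tag field containing those eight values, the export to SOAR is most likely fanning out one artifact per tag value, and the tags can be managed on the Splunk side via the event types' tags.
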
Hello community,

I apologize in advance: I don't speak English natively, so my writing won't be perfect. I have a problem with a comparison. I want to compare the number of acknowledgements against the number of alerts over a period of X minutes (for example: the number of acknowledgements between 0 and 5 minutes vs. the number of resolved alerts between 0 and 5 minutes). I use Splunk OnCall, and I think I have found the right search for it, but I don't know why I can't make a clean graph from it. I would like a bar with padding to indicate the delta between my acknowledged alerts and the total alerts. Here is what my search yields:

Do you know how to force the display to show the delta in question?

Best regards,

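A sketch of the usual trick for showing a delta as "padding": compute the delta explicitly and render a stacked column chart, so each bar's total height equals the total alerts (the field names bucket, acked, and total are assumptions standing in for whatever the OnCall search returns):

<your search>
| eval delta = total - acked
| table bucket acked delta

With the chart's stack mode set to "stacked", the acked segment and the delta segment visually add up to the total, and the delta is the padding between acknowledged and total alerts.
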
Hi folks,

I have an admin user running a search and he gets results; however, I have other users with a custom role with permissions to the same index, and they're unable to get the same results. This is an example of the search:

index=microsoft-logs ValueName="HA173MN"

The custom role gets results with just the index, but when the ValueName filter is added it gets nothing. It seems the problem is related to the permissions of some knowledge object that creates that field. Is there a query or an easy way to identify which knowledge object creates that field, so I can assign the right permissions to the custom role? I appreciate the help.

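One way to hunt for the object from SPL is the REST endpoints for knowledge objects. A sketch that checks field extractions and calculated fields mentioning ValueName (run it as admin; similar endpoints exist for lookup definitions and automatic lookups):

| rest /servicesNS/-/-/data/props/extractions
| search value="*ValueName*" OR attribute="*ValueName*"
| table title eai:acl.app eai:acl.sharing eai:acl.perms.read

| rest /servicesNS/-/-/data/props/calcfields
| search title="*ValueName*"
| table title eai:acl.app eai:acl.sharing eai:acl.perms.read

An object showing eai:acl.sharing = user, or an app-level object with a restrictive read list, would match the symptom: the admin sees the extraction and the custom role does not.
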
I am getting the errors below in index=_internal for Splunk 8.2.5. The lookup is available, and I am able to open it with my admin user. What can be the reason, and how can it be resolved?

07-21-2022 15:11:03.840 +0300 ERROR LookupProviderFactory [100328 TcpChannelThread] - sid:summarize_1658405463.11224 The lookup table 'all_eventName' is disabled. Contact your system administrator.
07-21-2022 15:11:03.840 +0300 ERROR LookupProviderFactory [100328 TcpChannelThread] - The lookup table 'all_eventName' does not exist or is not available.

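"Disabled" here usually refers to the lookup definition (the transforms.conf stanza) rather than the CSV file, and the message can also appear when the search runs in a context where the definition is not shared. A sketch to check both angles:

| rest /servicesNS/-/-/data/transforms/lookups
| search title="all_eventName"
| table title disabled eai:acl.app eai:acl.sharing eai:acl.perms.read

On the command line, $SPLUNK_HOME/bin/splunk btool transforms list all_eventName --debug shows which transforms.conf supplies the stanza and whether it sets disabled = 1. Also note the sid begins with summarize_, so the failing search is an acceleration/summary job that may run under a different owner and app context than your admin session.
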
Hi All,

I have a few questions about bucket-rolling criteria; my question is mostly focused on hot buckets. We have two types of indexes: 1. default, 2. local (customized) indexes. When I check the log retention of the default index, hot shows 90 days, with Maxbucketcreate=auto and Maxdbsize=auto, and we don't define anything for the local indexes.

While checking, we figured out that for one particular index we only have 55 days of logs in our hot/warm storage, and the log consumption for that index is about 12-14 GB per day. For other local indexes we can see more than 104 days of logs.

My concern is which retention policy Splunk is following to roll the buckets for local indexes:
1. The 90-day period (which is not happening here)?
2. Rolling when the hot bucket is full? (If Splunk follows this, how much data can an index store per day, how many hot buckets does a local index have, and how much data can each bucket contain?)

I hope I'm not confusing things. Thanks.

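For reference, a sketch of the indexes.conf settings that interact here (values are illustrative, except maxTotalDataSizeMB, whose default really is 500000 MB); whichever limit is reached first wins, which is why two indexes with the same age policy can hold very different numbers of days:

# indexes.conf
[your_index]
maxHotBuckets = auto              # number of open hot buckets
maxDataSize = auto                # per-bucket size at which a hot bucket rolls to warm
maxTotalDataSizeMB = 500000       # size cap for the whole index
frozenTimePeriodInSecs = 7776000  # ~90 days; buckets whose newest event is older get frozen

So an index ingesting 12-14 GB/day can hit the size cap after roughly 55 days while a quieter index coasts past 104 days: effective retention is min(age limit, size limit). Hot buckets can also roll for unrelated reasons (restart, idle time, too wide a time span), independent of retention.
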
Hi,

I have a search like the one below, where the logs come into the fig1, fig4, fig5, and fig6 indexes from either of two hosts, say host1 and host2. The two hosts never send logs at the same time; only one of them actively sends logs to the fig1 index with sourcetype abc.

| tstats latest(_time) as latest_time where index=fig* NOT index IN (fig2, fig3) sourcetype="abc" by host index sourcetype
| eval silent_in_hours=round((now() - latest_time)/3600, 2)
| where silent_in_hours > 20
| eval latest_time=strftime(latest_time, "%m/%d/%Y %H:%M:%S")

I want to build logic so that if either host1 or host2 is sending the logs, the above query gives no output (it should not report the silent host, because we are getting the logs from the other host). Thanks in advance.

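A sketch of one way to suppress the alert while the partner host is still live, assuming host1/host2 form a single active-passive pair (with several pairs you would first map host to a group name via a lookup or eval):

| tstats latest(_time) as latest_time where index=fig* NOT index IN (fig2, fig3) sourcetype="abc" by host index sourcetype
| eventstats max(latest_time) as pair_latest by index sourcetype
| eval silent_in_hours=round((now() - pair_latest)/3600, 2)
| where silent_in_hours > 20
| eval latest_time=strftime(latest_time, "%m/%d/%Y %H:%M:%S")

Because eventstats takes the maximum latest_time across both hosts within each index/sourcetype group, the freshest host decides whether the pair counts as silent, so no rows are returned while either host is still reporting.
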