All Topics

Find Answers
Ask questions. Get answers. Find technical product solutions from passionate members of the Splunk community.


I want to check the add-ons and apps in my environment, along with the versions we are using. Doing it manually by logging on to every forwarder and checking is a pain. Is there any command that can give me this data in one go? Thanks in advance.
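One possible sketch: the `| rest` command can pull the apps endpoint from every search peer in one search. Note this only reaches instances that are search peers of your search head, not universal forwarders; for forwarders you would typically check via the deployment server or the `splunk display app` CLI on each host.

```
| rest /services/apps/local splunk_server=*
| table splunk_server title version
| sort splunk_server title
```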
I have a query that produces a sample of the results below.

DateTime   Namespace     Type
18-May-20  sys-uat       Compliance
5-May-20   emit-ssg-oss  Compliance
5-May-20   sast-prd      Vulnerability
5-Jun-20   portal-api    Compliance
8-Jun-20   ssc-acc       Compliance

I would like to count the number of Types each Namespace has over a period of time. The resulting visualization should display the count for each Namespace (grouped by day or month) based on the time picker. For example, sys-uat has 20 Types for May and 9 Types for June. This way I can compare the counts of each Namespace side by side. If I do this:

| timechart span=1month count by Namespace

the Namespace is split between the months. I want each Namespace to be displayed side by side; for example, the blue bars should be adjacent instead of being split. Is there a way to do this? Thank you.
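A sketch of one way to get bars grouped by Namespace rather than by time: put Namespace on the x-axis with `chart ... over ... by ...`, using a month string as the split field (`<your base search>` is a placeholder for the query above).

```
<your base search>
| eval Month=strftime(_time, "%Y-%m")
| chart count over Namespace by Month
```

With a column chart, each Namespace then shows one column per month, side by side. `%Y-%m` is used rather than `%b-%y` so the months sort chronologically.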
I need to create a Rank based on events that occur dynamically. I've tried this, but it doesn't work:

index="abc" source="bcd"
| eval ComputerName=upper(ComputerName)
| join ComputerName [| savedsearch Computers_By_Product productName="DELL"]
| eval title = replace(title,"{","")
| eval title = replace(title,"}","")
| rename title as signature
| join type=left signature [search index="abc" source="dce" earliest=1 latest=now() | stats dc(id) as IDs by signature]
| eventstats dc(DateTime) as issueCount by ComputerName
| eventstats dc(ID) as fixCount by ComputerName
| sort 0 - issueCount
| streamstats current=f window=1 values(issueCount) as Prev
| eval Rank_filled=if(prev=issueCount,0,1)
| accum Rank_filled
| table ComputerName issueCount Rank_filled

I need a rank like:

issueCount  Rank
2           1
2           1
1           2
1           2

Thanks.
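One likely issue: SPL field names are case-sensitive, and the query above creates `Prev` but tests `prev`. A minimal dense-rank sketch for the tail of the pipeline (assuming `issueCount` is already computed as above):

```
| sort 0 - issueCount
| streamstats current=f window=1 values(issueCount) as Prev
| eval Rank_step=if(isnull(Prev) OR Prev!=issueCount, 1, 0)
| accum Rank_step as Rank
| table ComputerName issueCount Rank
```

The first row (where `Prev` is null) gets step 1, and the step increments only when `issueCount` changes, so ties share the same Rank.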
I keep getting these two errors when restarting the forwarder:

ERROR TailReader - File will not be read, seekptr checksum did not match
File will not be read, is too small to match seekptr checksum

I'm consuming multiple logs in the one stanza, and all of the logs have one or both of these error messages. I am at a complete loss on how to fix this issue, and it's beginning to greatly frustrate me. Here is my current inputs.conf for reference:
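These errors usually mean the CRC checkpoint stored in the fishbucket no longer matches the start of the file, which commonly happens when files are rotated, truncated, or share identical headers. A hedged inputs.conf sketch (the monitor path is a placeholder; pick the setting that matches your situation rather than applying both blindly):

```
[monitor:///var/log/myapp/*.log]
# Hash a longer prefix so files with identical first lines get distinct CRCs
initCrcLength = 1024
# Or mix the source path into the CRC; note this forces renamed/rotated
# files to be re-read from scratch, which can duplicate data
crcSalt = <SOURCE>
```

`crcSalt = <SOURCE>` is a literal documented value, not a placeholder to substitute.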
I am using inputlookup in a search query, and the search key in the lookup table (test.csv) contains wildcards, as shown below.

FILENAME  EMAIL
abc*      test1@a.com
xyz*      test2@a.com

The query should match fname in the log file against FILENAME from the lookup table; if there's a match, the result should look something like:

FILENAME  EMAIL        cnt
abc*      test1@a.com  2
xyz*      test2@a.com  0

Instead, my query output is:

fname       EMAIL        count
abc*        test1@a.com  0
abc123.txt               1
abc.dat                  1
xyz*        test2@a.com  0

This is my query:

index=* host=* source="/bustools/*"
| rex max_match=100 "\d+\d+\s(?<ts>.*)\s(?<directory>\/.*)\/(?<fname>.*)"
| dedup fname
| search [| inputlookup test.csv | rename FILENAME AS fname | fields fname]
| stats count as occur by fname
| append [| inputlookup test.csv | rename FILENAME AS fname | fields fname]
| fillnull occur
| stats sum(occur) as cnt BY fname
| join type=left fname [| inputlookup test.csv | rename FILENAME AS fname]
| table fname EMAIL cnt

Any help would be appreciated.
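A possible direction: Splunk lookups support wildcard matching natively when the lookup definition sets `match_type`, which avoids the append/join gymnastics. A sketch, assuming a lookup definition named `test_lookup` is created for test.csv:

```
# transforms.conf (or Settings > Lookups > Lookup definitions > Advanced options)
[test_lookup]
filename = test.csv
match_type = WILDCARD(FILENAME)
```

Then something like `... | lookup test_lookup FILENAME AS fname OUTPUT FILENAME AS pattern, EMAIL | stats count as cnt by pattern, EMAIL` should group events under the wildcard pattern they matched, since OUTPUTting the key field returns the matching row's pattern.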
Hi All, Below is my dashboard. I have two panels and want a different border style for each, to visually separate them. How can I do that? I tried using the element's id to target the particular panel, but it isn't working.

.dashboard-row .dashboard-panel dashboard-element #element4 {
  border-style: solid;
  border-color: #92a8d1;
  border-width: thick;
}
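A sketch of a likely fix: in Simple XML, the `id` you set in the XML becomes the id of the dashboard element itself, so `#element4` should not be written as a descendant of `dashboard-element` (and `dashboard-element` would need a leading dot as a class selector anyway). Assuming the panel's element has `id="element4"`:

```css
/* #element4 IS the dashboard element; select it directly */
#element4 {
  border: thick solid #92a8d1;
}
```

Each panel can then get its own id and its own border rule.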
Hi ninjas, I am using DB Connect 2.x to get data from a database into Splunk. Some sensitive fields are not allowed to appear in clear text, so I have to hash/encrypt the data before indexing it in Splunk. I tried hashing/encrypting the fields in SQL, but that caused very high CPU consumption on the DB. I solved this by modifying the DB Connect 2.x code (in Python) to encrypt field data before sending it to the event stream; this also let me scale the computation out to a cluster of heavy forwarders. But with DB Connect 3.x I am unable to do that. Is there any solution to hash/encrypt field data before indexing into Splunk with DB Connect 3.x? Something like adding a custom handler to process the result set from the DB before DBX 3.x sends the events to HEC. I am upgrading to DBX 3.x because of its performance and stability. I found the same requirement in this post, but no solution yet: https://answers.splunk.com/answers/488681/can-splunk-db-connect-reformat-data-before-indexin.html Thank you very much. Lang
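One avenue worth exploring, as a sketch only: index-time `INGEST_EVAL` on the parsing tier can rewrite `_raw` before indexing. The stanza names, the `ssn=` field layout, and the availability of `sha256()` to INGEST_EVAL in your Splunk version are all assumptions here:

```
# props.conf on the heavy forwarder / indexer (sourcetype name assumed)
[dbx_sensitive]
TRANSFORMS-hash = hash_sensitive_field

# transforms.conf -- rewrite _raw at index time, replacing the value
# of a hypothetical ssn=<value> pair with its SHA-256 digest
[hash_sensitive_field]
INGEST_EVAL = _raw=replace(_raw, "ssn=\S+", "ssn=".sha256(replace(_raw, "(?s).*ssn=(\S+).*", "\1")))
```

This keeps the hashing on the forwarding tier, similar to the DBX 2.x Python modification, without patching DBX 3.x itself.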
Dear Guys, This is about applying a shcluster-bundle that fails with an error.

splunk apply shcluster-bundle -target https://xxx.xxx.xxx.62:8089 -auth admin:"<password>"

It produces the messages below.

Error while deploying apps to first member, aborting apps deployment to all members: Error while fetching apps baseline on target=https://xxx.xxx.xxx.62:8089 Non-200/201 status_code=401; {"messages":[{"type":"WARN","text":"call not properly authenticated"}]}

I have already checked and tried the following:
The pass4SymmKey under the shclustering stanza is the same hash on all 3 members and the deployer.
The pass4SymmKey under the shclustering stanza is the same hash on all 3 members (without the deployer).
Bundle size is not a problem.
All servers use the same admin password for the web UI.
Each search head has a unique GUID (/opt/splunk/etc/instance.cfg).

In addition, when the command runs:

Search head member #1 - $SPLUNK_HOME/var/log/splunk/splunkd.log
**-**-**** **:**:37.454 +0900 ERROR DigestProcessor - Failed signature match
**-**-**** **:**:37.454 +0900 ERROR LMHttpUtil - Failed to verify HMAC signature, uri: /services/shcluster/member/members?output_mode=json&count=-1
**-**-**** **:**:37.457 +0900 ERROR DigestProcessor - Failed signature match
**-**-**** **:**:37.457 +0900 ERROR LMHttpUtil - Failed to verify HMAC signature, uri: /services/apps/local?output_mode=json&count=-1&show_hidden=1

Search head deployer - $SPLUNK_HOME/var/log/splunk/splunkd.log
**-**-**** **:**:32.809 +0900 INFO TcpOutputProc - After randomization, current is first in the list. Swapping with last item
**-**-**** **:**:32.812 +0900 INFO TcpOutputProc - Connected to idx= xxx.xxx.xxx.64:9997, pset=0, reuse=0.
**-**-**** **:**:37.456 +0900 WARN AppsDeployHandler - Error while fetching members from uri=https://xxx.xxx.xxx.62:8089: Non-200 status_code=401: Unauthorized
**-**-**** **:**:37.459 +0900 WARN AppsDeployHandler - Error while deploying apps to first member, aborting apps deployment to all members: Error while fetching apps baseline on target=https://xxx.xxx.xxx.62:8089 Non-200/201 status_code=401; {"messages":[{"type":"WARN","text":"call not properly authenticated"}]}

Please let me know your tips.
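One thing worth checking, as a sketch: the encrypted `pass4SymmKey` value depends on each instance's own `splunk.secret`, so comparing the hashed values across instances does not prove the underlying secrets match (identical ciphertext on hosts with different `splunk.secret` files would actually suggest they differ). A common remedy is to re-set the key in plaintext everywhere and let each instance re-encrypt it:

```
# server.conf on every member AND the deployer; restart each instance after
[shclustering]
pass4SymmKey = samePlaintextSecretOnEveryInstance
```

The "Failed to verify HMAC signature" errors are consistent with a key mismatch between the deployer and the members.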
Hi there guys, We're having some problems with the SH and the IDX. We have 5 indexers (3 of them not in a cluster, the other 2 are), and when we search, only 3 of them show up (the ones outside the cluster). But when we search with splunk_server=*, all five of them appear. We don't know what is going on. We used another SH, and when we added the indexers as search peers there, it worked. We have tried everything but can't find the answer. I really hope you can help us. Thanks a lot.
Hi All, I have a requirement to capture the application logs generated inside containers (not the container logs) in Splunk. Please help if there are any solutions available.
Hi, I want to extract the timestamp from my log and make it the official _time in Splunk, and I'm having difficulty doing that. I'd like to keep the date current, as there is no date in the log files. This is an example of what a log looks like with the Splunk time: And this is my props.conf: I just want the time in the logs to match the time in Splunk, and I am not sure what I am doing wrong. Please help.
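Without the screenshots the exact fix can't be confirmed, but a hedged props.conf sketch for a time-only timestamp (sourcetype name and timestamp layout are assumptions):

```
# props.conf on the first full Splunk instance that parses the data
[my_sourcetype]
# Anchor immediately before the timestamp in the event
TIME_PREFIX = ^
# Time-only format; when no date components are parsed, Splunk
# falls back to the current/file date, keeping the date current
TIME_FORMAT = %H:%M:%S.%3N
MAX_TIMESTAMP_LOOKAHEAD = 20
```

A common gotcha: props.conf timestamp settings must live on the indexer or heavy forwarder that parses the data, not on a universal forwarder.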
Trying to extract Dimensions out of a query, but it is taking 1500-plus regex steps, which triggers a limits.conf error.

[{(, ](?<Dimensions>[a-z0-9A-Z\[\.\]+[\-\ \_]*)[\.&\[]

SELECT { [Measures].[IMS Org Count] } ON COLUMNS, NONEMPTY ( { [End User].[End User ID].[End User ID].MEMBERS * [End User].[End User Name].[End User Name].MEMBERS * [Product].[PMBU Short Desc].[PMBU Short Desc].MEMBERS * [Product].[PMBU Medium Desc].[PMBU Medium Desc].MEMBERS * [IMS Org].[Unique Id].[Unique Id].MEMBERS * [IMS Org].[IMS Org Id].[IMS Org Id].MEMBERS * [IMS Org].[MC Org Name].[MC Org Name].MEMBERS * [Is Active Account].[Is Active Account].[Is Active Account].MEMBERS* [Billing End User].[End User ID].[End User ID].MEMBERS } , [Measures].[IMS Org Count] ) ON ROWS FROM ( SELECT CASE '1' WHEN "5" THEN [Account Manager].[AM Org Lead Ldap].[AM Org Lead Ldap].[xxxx] WHEN "4" THEN [Account Manager].[Regional Manager Ldap].[Regional Manager Ldap].[xxx] WHEN "3" THEN [Account Manager].[AM Manager Ldap].[AM Manager Ldap].[xxx] WHEN "2" THEN [Account Manager].[AM Lead Ldap].[AM Lead Ldap].[xxx] WHEN "1" THEN [Account Manager].[AM Ldap].[AM Ldap].[xxxx] END ON 0 FROM XX )

https://regex101.com/r/HEdUhy/1/
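A possible simplification: the dimension names all sit inside square brackets, so a negated character class avoids the heavy backtracking of the original pattern (whose character class mixes `[`, `.`, `]` with an oddly placed quantifier). A sketch; you may still need to filter out entries like Measures afterwards:

```
... | rex max_match=0 "\[(?<Dimensions>[^\]\[]+)\]"
```

`[^\]\[]+` can never cross a bracket boundary, so the match is effectively linear-time and should stay well under the limits.conf step ceiling.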
Hi, I am expecting an event at 7:15 and I want to write a search that gives me results as below:

If the event arrived at 7:15 — result 1
If the event has not arrived at 7:15 — result 2
If the event still hasn't arrived 30 minutes after 7:15 — result 3
The moment I receive the event — result 1

Thank you for your help in advance.
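A sketch of one approach, assuming the search is scheduled to run every few minutes from 07:15 onward (index and sourcetype are placeholders): count events since 07:15 today, then use `case()` to decide the result based on whether the 30-minute grace window has elapsed.

```
index=myindex sourcetype=my_sourcetype earliest=@d+7h+15m
| stats count AS arrived
| eval result=case(
    arrived > 0, 1,
    now() >= relative_time(now(), "@d+7h+45m"), 3,
    true(), 2)
```

`stats count` returns a row even with zero events, so the `case()` always produces a result.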
Hi all, I have a distributed Splunk Enterprise deployment. I am trying to filter incoming registry events to remove wasteful data on all of my forwarders. This is the stanza in question from $SPLUNK_HOME\etc\deployment_apps\Splunk_TA_Windows\local\inputs.conf. The app is deployed to all appropriate forwarders, and I have run 'reload deploy-server' after saving changes.

[WinRegMon://hklm]
disabled = 0
hive = \\REGISTRY\\MACHINE\\.*
proc = ^(?:(?!first\.exe|Second\.Punctuated\.exe).)*$
type = create|delete
index = windows-mon

When watching incoming data, the regex isn't working; events containing these exe names in process_image are still present. I have checked on regex101.com using the example data below, and it works perfectly.

06/11/2020 08:51:57.983 event_status="(0)The operation completed successfully." pid=1996 process_image="c:\Program Files\Folder\first.exe" registry_type="CreateKey" key_path="HKLM\software\folder\classifiedapplications" data_type="REG_NONE" data=""
06/11/2020 08:53:18.187 event_status="(0)The operation completed successfully." pid=2084 process_image="c:\Program Files (x86)\Folder\Second.Punctuated.exe" registry_type="CreateKey" key_path="HKLM\software\microsoft\enterprisecertificates\trust\ctls" data_type="REG_NONE" data=""

What am I doing wrong?
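If the `proc` filter can't be made to behave, a fallback sketch is to drop the events at parse time with a nullQueue transform on the indexing tier. The sourcetype name `WinRegistry` is assumed to be what WinRegMon emits in your deployment; verify before applying:

```
# props.conf on the indexer / heavy forwarder
[WinRegistry]
TRANSFORMS-dropprocs = drop_known_procs

# transforms.conf -- route matching events to the null queue
[drop_known_procs]
REGEX = (?i)process_image="[^"]*\\(?:first\.exe|Second\.Punctuated\.exe)"
DEST_KEY = queue
FORMAT = nullQueue
```

Unlike the `proc` setting, this filters on the event text after capture, so it is indifferent to how WinRegMon applies its process match.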
In Splunk Enterprise, the list of jobs under Activity >> Triggered Alerts is visible, and the results can also be seen, by other users who do not have the privilege. Has anybody observed this and controlled it for a given user/role?
I'm getting the following error while trying to save a correlation search as a user with the ess_admin role:

There was an error saving the correlation search: User 'local_ess_admin' with roles { ess_admin, ess_analyst, ess_user, local_ess_admin, power, user } cannot write: /nobody/SplunkEnterpriseSecuritySuite/savedsearches/Threat - test2 - Rule { read : [ * ], write : [ admin ] }, export: global, owner: admin, removable: no, modtime: 1591818982.977029000

The ess_admin role should be allowed to edit correlation searches by default, and the role does have the "edit_correlationsearches" capability. Is there any other capability that should be enabled for this to work?
Hello All! I have a .csv file containing a list of about 100 or so hash values that I'd like to alert on, so that I'll know if they appear on the network. I have an inputlookup that I created called "hashes.csv" containing the values I'd like to monitor. Does anyone have the SPL I would need to do this? Your help is very much appreciated! Thanks.
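A minimal sketch, assuming the events carry a `file_hash` field and the lookup's column is named `hash` (adjust both names to match your data; the index is also a placeholder). The subsearch expands into a big `(file_hash=... OR file_hash=...)` filter:

```
index=endpoint_data
    [| inputlookup hashes.csv
     | rename hash AS file_hash
     | fields file_hash]
| stats count by file_hash, host
```

Saved as an alert with a trigger condition of "number of results > 0", this fires whenever any listed hash is seen.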
We are having issues with Kubernetes containers sometimes spamming Splunk with hundreds of GBs of logs. We would like to put together a search that tracks containers with a sudden log spike and generates an alert. More specifically: 1) look at the average rate of events, 2) find the peak, 3) decide on a percentage of that peak, and 4) trigger an alert when a container has breached the threshold. The closest I have come up with is the search below, which has an average rate and standard deviation of that rate by hour:

index="apps" sourcetype="kube"
| bucket _time span=1h
| stats count as CountByHour by _time, kubernetes.container_name
| eventstats avg(CountByHour) as AvgByKCN stdev(CountByHour) as StDevByKCN by kubernetes.container_name
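The search above can be extended into an alert along the lines of steps 2–4; the 0.8 peak fraction and the 2-sigma cutoff below are placeholder thresholds to tune:

```
index="apps" sourcetype="kube"
| bucket _time span=1h
| stats count as CountByHour by _time, "kubernetes.container_name"
| eventstats avg(CountByHour) as AvgByKCN,
             stdev(CountByHour) as StDevByKCN,
             max(CountByHour) as PeakByKCN
             by "kubernetes.container_name"
| where CountByHour > 0.8 * PeakByKCN
    AND CountByHour > AvgByKCN + 2 * StDevByKCN
```

Any row surviving the `where` is a container whose current hour exceeds both a fraction of its historical peak and its mean-plus-two-sigma rate; an alert triggering on "number of results > 0" completes the loop.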
Using the API, I cannot tell the difference between reports and alerts. How do I distinguish them? A parameter in my request? A property returned in the response?

https://mysplunkserver.local:8089/servicesNS/-/-/saved/searches?count=0
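The same distinction can be sketched from the response properties: a saved search behaves as an alert when its `alert_type` is anything other than "always", or when it tracks triggered alerts (`alert.track`). As an SPL illustration of the heuristic (the classification rule here is an assumption, roughly matching how Splunk Web splits the two):

```
| rest /servicesNS/-/-/saved/searches count=0
| eval kind=if(alert_type!="always" OR 'alert.track'="1", "alert", "report")
| table title kind alert_type alert.track is_scheduled
```

The same `alert_type` and `alert.track` properties appear in the raw REST response, so the check can be done client-side with no extra request parameter.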
Not sure why I get stuck with a "Loading" screen.  Latest version of Splunk. What am I missing?