All Topics

Hello folks, I am having a bit of trouble finishing an update. I got a message during the update. Where is the migration log? mongod.log? I do not see anything to work with. Now kvstored is disabled and I cannot manually update to WiredTiger. Thanks in advance.
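In case it helps, the KV store migration typically writes to $SPLUNK_HOME/var/log/splunk/mongod.log and splunkd.log, and recent Splunk versions document a manual storage engine migration command. A sketch, worth verifying against the docs for the exact version in use (the KV store has to be able to start for it to run):

# run as the splunk user on the affected instance
splunk migrate kvstore-storage-engine --target-engine wiredTiger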
Hey Splunk Team, I was integrating Splunk with a Linux machine. After entering the curl installer script in the terminal, I am getting an error for the SSL certificate. Sincerely,
Hi Splunkers, I'm working on a pie chart where I have to put two different fields of results in the graph. For example, I have a column called Risk where I'm doing stats count by risk and putting the values in a pie chart. I also want to add another set of results from a different search, like stats count by SLA, to the same pie chart. How can I append both results into the pie chart?
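One approach that may work is append: run the second search as a subsearch and collapse both category fields into a single field the pie chart can split on. A minimal sketch — the index and sourcetype names are placeholders, since the real base searches aren't shown:

index=main sourcetype=risk_data
| stats count by risk
| rename risk as label
| append
    [ search index=main sourcetype=sla_data
      | stats count by SLA
      | rename SLA as label ]
| stats sum(count) as count by label

Pointing the pie chart at label/count then shows both sets of slices. One caveat: the two result sets have different totals, so the slice percentages mix the two populations.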
I'm looking to drop EventID 4673 where action=failure. Here is an example log:

3/15/2023 02:51:42 PM
LogName=Security
EventCode=4673
EventType=0
ComputerName=redacted
SourceName=Microsoft Windows security auditing.
Type=Information
RecordNumber=redacted
Keywords=Audit Failure
TaskCategory=Sensitive Privilege Use
OpCode=Info
Message=A privileged service was called.
Subject:
  Security ID: redacted
  Account Name: redacted
  Account Domain: redacted
  Logon ID: redacted
Service:
  Server: Security
  Service Name:
Process:
  Process ID: xxxxx
  Process Name: C:\Windows\System32\backgroundTaskHost.exe
Service Request Information:
  Privileges: SeTcbPrivilege

From reading https://docs.splunk.com/Documentation/Splunk/8.2.6/Admin/Inputsconf?_ga=2.40401506.1999669205.1678852413-817152181.1624861549&_gl=1*s1kmhp*_ga*ODE3MTUyMTgxLjE2MjQ4NjE1NDk.*_ga_5EPM2P39FV*MTY3ODg2MDY5OS44Ni4xLjE2Nzg4NjA3NjAuNjAuMC4w#Event_Log_allow_list_and_deny_list_formats I can see that action is not a valid field to filter on:

# Valid keys for the key=regex format:
* The following keys are equivalent to the fields that appear in the text of the acquired events:
  Category, CategoryString, ComputerName, EventCode, EventType, Keywords, LogName, Message, OpCode, RecordNumber, Sid, SidType, SourceName, TaskCategory, Type, User

So I chose to use Keywords, which has the value "Audit Failure". Here is my inputs.conf:

---------------------
[WinEventLog://Security]
disabled = 0
index=corp_oswinsec
current_only=1
evt_resolve_ad_obj=0
checkpointInterval = 5
blacklist1 = EventCode="4673" Keywords="Audit Failure"
--------------------------------

I am still seeing these events being indexed, however — any tips on where I am going wrong would be much appreciated!
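Two hedged guesses rather than a confirmed answer: the values in the key=regex pairs are regular expressions, and inputs.conf edits on a forwarder only take effect after the forwarder restarts, so a stanza that looks right can still let events through until then. A sketch of the same stanza with the space made explicit in the regex:

[WinEventLog://Security]
disabled = 0
index = corp_oswinsec
current_only = 1
evt_resolve_ad_obj = 0
checkpointInterval = 5
# Each key=regex value is a regular expression; \s matches the space
# in "Audit Failure". Restart the forwarder after editing this file.
blacklist1 = EventCode="4673" Keywords="Audit\sFailure"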
Hi all, I am trying to get data from the Microsoft Security add-on, specifically Defender data. It seems that even after giving the necessary permissions on the threat API in Azure, I am still not getting the data. Any help is appreciated.
I am working to merge two searches. The first search outputs one or more account names:

index=x sourcetype=y
| table account

The second search (below), for each account name, filters the lookup CSV table 'account_lookup' on that account name and counts the number of dates in an adjacent column of the lookup table that are within the last seven days:

| inputlookup append=T account_lookup where account=Account_A
| where time > relative_time(now(),"-7d")
| stats count as "Accounts Updated in Last 7 Days"

My searches and attempts to apply related information have not yet revealed how I can pass the account names output by the first search into the lookup in the second search.

Many thanks for any help.
Sven
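One common pattern is to run the event search as a subsearch, so its account values are rendered as a filter like ( account="A" OR account="B" ) and applied to the lookup rows. A minimal sketch, assuming the lookup's account field matches the events' account field and that time holds epoch values:

| inputlookup account_lookup
| search [ search index=x sourcetype=y | dedup account | fields account ]
| where time > relative_time(now(), "-7d")
| stats count as "Accounts Updated in Last 7 Days"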
Hi, I am using tstats to search the Network Traffic data model for outbound SMB traffic (port 445) to external IP address ranges. Why are local IP ranges still appearing in my search results? Here is my syntax:

| tstats summariesonly=t fillnull_value="MISSING" count from datamodel=Network_Traffic.All_Traffic
    where All_Traffic.dest_port="445"
    AND NOT All_Traffic.dest IN ("10.0.0.0/8","172.16.0.0/16","192.168.0.0/24")
    earliest=-15m latest=now
    by _time, All_Traffic.dest, All_Traffic.dest_port, All_Traffic.src, All_Traffic.src_port, All_Traffic.action, All_Traffic.bytes, index, sourcetype

(Screenshot omitted.) I believe I have filtered them correctly, but the local ranges still show up.
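One detail that stands out: the RFC 1918 private ranges are 10.0.0.0/8, 172.16.0.0/12, and 192.168.0.0/16. The /16 and /24 masks in the query exclude only a slice of the last two ranges, so addresses such as 172.17.x.x or 192.168.1.x still pass the filter. A sketch with the corrected masks:

| tstats summariesonly=t fillnull_value="MISSING" count from datamodel=Network_Traffic.All_Traffic
    where All_Traffic.dest_port="445"
    AND NOT All_Traffic.dest IN ("10.0.0.0/8","172.16.0.0/12","192.168.0.0/16")
    earliest=-15m latest=now
    by _time, All_Traffic.dest, All_Traffic.dest_port, All_Traffic.src, All_Traffic.src_port, All_Traffic.action, All_Traffic.bytes, index, sourcetype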
Hi everyone. I have followed the documentation for setting up TLS for inter-Splunk communication with self-signed certificates. I have a small test environment with an SH, an indexer, and a UF. However, I get the following error:

03-15-2023 01:23:39.475 +0000 ERROR TcpInputProc [2605538 FwdDataReceiverThread] - Error encountered for connection from src=10.0.0.4:45088. error:140760FC:SSL routines:SSL23_GET_CLIENT_HELLO:unknown protocol

I have created the following certificates and keys based on the Splunk documentation:

myCertAuthCertificate.csr
myCertAuthCertificate.pem
myCertAuthCertificate.srl
myCertAuthPrivateKey.key
myServerCertificate.csr
myServerCertificate.pem
myServerPrivateKey.key
mySplkCliCert.pem <- this is the concatenated file

I copy the myCertAuthCertificate.pem and mySplkCliCert.pem files from the SH to the indexer. On the SH and the indexer, I edit server.conf to have the following:

[sslConfig]
sslRootCAPath = /opt/splunk/etc/auth/mycerts/myCertAuthCertificate.pem
serverCert = /opt/splunk/etc/auth/mycerts/mySplkCliCert.pem
sslPassword = *****

What am I doing wrong?
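Not a confirmed diagnosis, but "SSL23_GET_CLIENT_HELLO:unknown protocol" on TcpInputProc usually means the sender (here the UF at 10.0.0.4) opened a plain TCP connection to a port the receiver expects TLS on, so the forwarding path is worth checking alongside server.conf. A minimal sketch, assuming receiving port 9997 and that the certificate files were also distributed to the UF (paths and group name are placeholders):

On the indexer, inputs.conf:

[splunktcp-ssl:9997]
disabled = 0

[SSL]
serverCert = /opt/splunk/etc/auth/mycerts/mySplkCliCert.pem
sslPassword = <key password>

On the UF, outputs.conf:

[tcpout:tls_group]
server = <indexer>:9997
clientCert = /opt/splunkforwarder/etc/auth/mycerts/mySplkCliCert.pem
sslPassword = <key password>

with the CA set in the UF's server.conf [sslConfig] sslRootCAPath, mirroring the search head.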
We have a notification service made up of four services: a web API; a fanout service that converts submitted multiple-recipient, multiple-delivery-method notifications into multiple notifications with just one recipient and one delivery method; and then delivery and retry services. Each service logs to Splunk as it processes a notification, so the states are "submitted", "fanned out", "delivered", and "pending retry". The log events have an ID associated with the notification and the state that just completed.

I am hoping to identify notifications that are missing states: for example, "submitted" appears as a logged event but no others, or "submitted" and "fanned out" appear but nothing else. Notifications expire, so bonus points if anyone can come up with a way to track "submitted", "fanned out", "pending retry", but stopped getting "pending retry" log events before the notification expired. "delivered" is of course the final state. Another way to think about this is looking for any "submitted" notification ID that does not have at least "fanned out" and "delivered".

I'm willing to set aside the complexity of the one-to-many relationship for now, unless someone has idea(s) about that. In other words, if the submitted notification has 3 recipients and 2 delivery methods, that should become 6 notifications. I'd love to be able to track that properly too, and I could log additional data to facilitate it if needed.
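A sketch of the missing-state check, with index, sourcetype, and field names (notification_id, state) as placeholders for whatever the events actually carry: collapse each notification's events into a multivalue list of states, then test for the states that should be present.

index=notifications sourcetype=notification_log
| stats values(state) as states latest(_time) as last_seen by notification_id
| eval has_fanout=if(isnotnull(mvfind(states, "fanned out")), 1, 0)
| eval has_delivered=if(isnotnull(mvfind(states, "delivered")), 1, 0)
| where has_fanout=0 OR has_delivered=0

For the expiry case, last_seen can be compared against the expiration window, e.g. | where has_delivered=0 AND last_seen < now() - 86400 for an assumed 24-hour TTL.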
Hello all - I need to be able to compare/graph regression test results from two different models. The search command to create a table for one of the searches is:

index="frontEnd" source="regress_rpt" pipeline="my_pipe" version="23ww10b" dut="*" (testlist="*") (testName="*") status="*"
| table cyclesPerCpuSec wall_hz testPath rpt

This returns a table with 6 rows (as there are 6 tests per version). Is there a way to compare the cyclesPerCpuSec of this search to a new search which has a different version? I.e.:

index="frontEnd" source="regress_rpt" pipeline="my_pipe" version="23ww10a" dut="*" (testlist="*") (testName="*") status="*"
| table cyclesPerCpuSec wall_hz testPath rpt

Thanks,
Pip
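Assuming testPath identifies the same test in both versions, one search over both versions can pivot them into side-by-side columns, which also graphs directly. A sketch:

index="frontEnd" source="regress_rpt" pipeline="my_pipe" (version="23ww10a" OR version="23ww10b") dut="*" (testlist="*") (testName="*") status="*"
| chart values(cyclesPerCpuSec) over testPath by version

Each row is then a testPath with one cyclesPerCpuSec column per version; an extra eval can compute the delta or ratio between the two columns if a single comparison number is wanted.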
Is there a method by which a playbook can be configured to add the tag to the artifact and not the whole container? We are running Splunk SOAR 5.0.1 on-prem. The playbook logic works; the only issue is that the entire container gets tagged.
Hi Splunkers, I'm working on a report panel in a dashboard where I need to show the difference of two fields in colors. Can anyone help me do this the right way using XML? For example:

Nov   Feb
10    5
20    10
30    40

If the value is less in Feb then it should show as green; if not, it should be in red. How can I do this? Can we do this using XML instead of JavaScript?
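A sketch of one way this is often done in Simple XML, without JavaScript: compute the difference in SPL, then color that column with an expression-based color palette. The field names and hex colors are placeholders, and "... existing search ..." stands in for the panel's real query:

<table>
  <search>
    <query>... existing search ... | eval Diff='Feb'-'Nov'</query>
  </search>
  <format type="color" field="Diff">
    <colorPalette type="expression">if(value &lt; 0, "#53A051", "#DC4E41")</colorPalette>
  </format>
</table>

A negative Diff (Feb lower than Nov) renders green, anything else red.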
After upgrading to 9.x, we are seeing higher CPU utilization.
In splunkd.log I can see the following:

"Local KV Store has replication issues. See introspection data and mongod.log for details. Cluster has not been configured on this member. KVStore cluster has not been configured."

If I check kvstore-status, it just says the KV store status is down for this new member. The normal shcluster-status, however, shows this new member as UP and its KV store as ready. I'm not sure what I can do to try and force the KV store to initialize. I have tried shutting Splunk down on the new member, doing a kvstore-clean, and restarting, but it still isn't taking.

Splunk version 9.0.0. Any thoughts on what else I can try?
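Not a guaranteed fix, but since a plain kvstore-clean didn't take, the sequence sometimes suggested for a search head cluster member whose KV store cluster was never configured is to clean with the --cluster flag and then resync; worth verifying against the 9.0 docs before running on the affected member:

splunk stop
splunk clean kvstore --cluster
splunk start
# once splunkd is back up on the member:
splunk resync kvstore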
I created a summary index with a custom _raw from a tstats search from 03/14/2023 16:30:00 to 03/14/2023 16:35:00:

| tstats summariesonly=false count sum(common.sentbyte) AS sentbyte sum(common.rcvdbyte) AS rcvdbyte FROM datamodel=CTTI_Fortinet_Log WHERE common.subtype=forward BY common.devname common.dstip common.sessionid
| rename common.devname as devname common.dstip as dstip common.sessionid as sessionid
| addinfo
| eval _time = strftime(info_min_time,"%m/%d/%Y %H:%M:%S %z")
| eval version=0.44
| eval _raw=_time . ", " . "devname=". devname . ", " . "dstip=" . dstip . ", " . "sessionid=" . sessionid . ", " . "sentbyte=" . coalesce(sentbyte,0) . ", " . "rcvdbyte=" . coalesce(rcvdbyte,0) . ", " . "version=" . version
| fields _raw
| collect index=superbasket_d_test addtime=f

It worked as intended, showing the correct extracted _time and _raw in the collect query results, but when I then search that same index, for some reason it adds 1 second to _time. Why does this happen?
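Not a diagnosis, but a way to quantify the drift before digging into timestamp-parsing settings: parse the timestamp back out of _raw and compare it with the indexed _time. This sketch assumes the timestamp is always the first comma-separated token of _raw, as in the collect query above:

index=superbasket_d_test
| eval raw_ts = strptime(mvindex(split(_raw, ","), 0), "%m/%d/%Y %H:%M:%S %z")
| eval drift_seconds = _time - raw_ts
| stats count by drift_seconds

A constant drift of exactly 1 for every event points at how the timestamp is re-parsed at index time rather than at the tstats math.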
Hi, I have onboarded data via DB Connect through a rising column, for which we have configured the rising column value as RS_LAST_MAINTENANCE_TIMESTAMP, which is the default Time field. But in the dashboard we are filtering the month-wise apps count based on APPLICATION_CRT_DT, which has no timestamp. The issue is that if we search data for the last 7 days, January data is also populated, because that particular app was created in January and has updated values in the last 7 days. So I wrote a "where" condition like the one below, which does not work in all cases: it works only when searching "since <date>", where epoch times are applied to the where condition and I get accurate results, but when searching for last 7 days, 24 hours, or all time, the parameter is passed as -7d@d and I get an "invalid" error. Kindly help with this.

<input type="time" token="datefield">
  <default>
    <earliest>0</earliest>
    <latest>now</latest>
  </default>
</input>
<row>
  <table>
    <search>
      <query>index=* source=tablename
| eval Total_Apps=if(match('Type',"NTB"),"1","0")
| eval Date=strptime(APPLICATION_CRT_DT,"%Y-%m-%d %H:%M:%S")
| where Date&gt;=$datefield.earliest$ OR Date&lt;=$datefield.latest$
| eval Mon-Year=strftime(strptime(APPLICATION_CRT_DT,"%Y-%m-%d %H:%M:%S"),"%b-%Y")
| stats sum(Total_Apps) as "Total Apps" by Mon-Year</query>
    </search>
  </table>
</row>
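One pattern that may help: time-picker tokens arrive in several shapes (epoch numbers, relative strings such as -7d@d, or the literals 0 and now), so converting them to epoch before the comparison handles all the cases; the comparison itself likely wants AND rather than OR. A sketch, keeping the question's Date field and token names:

| eval min_epoch = case("$datefield.earliest$"=="0", 0,
                        isnotnull(tonumber("$datefield.earliest$")), tonumber("$datefield.earliest$"),
                        true(), relative_time(now(), "$datefield.earliest$"))
| eval max_epoch = case("$datefield.latest$"=="now", now(),
                        isnotnull(tonumber("$datefield.latest$")), tonumber("$datefield.latest$"),
                        true(), relative_time(now(), "$datefield.latest$"))
| where Date >= min_epoch AND Date <= max_epoch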
Hi, here is my data, in 2 logs having 3 fields.

Log 1:

AccountName   books bought   bookName
ABC           4              book1, book2, book3, book1
DEF           3              book1, book2, book2
MNO           1              book3

Log 2:

AccountName   books sold   bookName
ABC           1            book3
DEF           2            book2, book2
MNO           1            book3

Result I want:

AccountName   Total Books   bookName   bought   sold
ABC           4             book1      2        0
                            book2      1        0
                            book3      1        1
DEF           3             book1      1        0
                            book2      2        2
MNO           1             book3      1        1

Can anyone please help me with this?
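A sketch of one way to get there, assuming each log line is an event with fields AccountName and bookName (a comma-separated list), and that sourcetype distinguishes the two logs; the index and sourcetype names are placeholders. The idea is to expand the book list so each book becomes its own row, then count bought and sold per account and book:

index=books (sourcetype=log1 OR sourcetype=log2)
| eval action=if(sourcetype=="log1","bought","sold")
| makemv delim="," bookName
| mvexpand bookName
| eval bookName=trim(bookName)
| stats count(eval(action="bought")) as bought, count(eval(action="sold")) as sold by AccountName, bookName
| eventstats sum(bought) as "Total Books" by AccountName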
I have a lookup file of hostnames:

HostName
Host1
Host2
Host3
Host4
Host5

I would like to create a search that includes only events from the hostnames listed in my lookup file. How do I do this? The "host" field matches the "HostName" field in my lookup file. An example would be: I am looking for which of these hosts are sending Windows security logs and which are not. I know all these systems should be, but some are not, and I want to use the lookup file to know which ones are and which ones are not.
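A sketch of two common patterns, assuming the events live in an index named wineventlog and the lookup file is hostnames.csv (both names are placeholders). To keep only events from the listed hosts, let a subsearch turn the lookup into a host filter:

index=wineventlog sourcetype="WinEventLog:Security"
    [| inputlookup hostnames.csv | rename HostName as host | fields host ]
| stats count by host

To also see which listed hosts are NOT sending, start from the lookup and join in the event counts:

| inputlookup hostnames.csv
| rename HostName as host
| join type=left host
    [| tstats count where index=wineventlog sourcetype="WinEventLog:Security" by host ]
| eval status=if(isnull(count) OR count=0, "not sending", "sending")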
Hi everyone, I have the following sample search, which yields the table below.

index=server
| stats avg(response_time) by server_name
| sort + avg(response_time)
| streamstats count as rank
| head 3

rank   server_name          avg(response_time)   new_performance_metric
1      best.server          300
2      second.best.server   350
3      third.best.server    400

Once I know the top servers, I want to calculate an additional new_performance_metric for each of the three servers. Does anyone know how this can be done?

Note:
- I can't use foreach, since the metric I want to calculate involves streaming commands, and foreach does not support that.
- I think I can't use a subsearch, since it is executed first, when the top servers are not known yet.
- I can't precompute new_performance_metric for all servers and then use something like a lookup, since this is computationally too expensive.

My guess is that the solution involves a macro, but I couldn't figure it out yet. Many thanks in advance.
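One pattern that may fit, sketched under the assumption that the metric can be computed in a per-server search: the map command runs a secondary search once per row of the top-3 result, substituting $server_name$ into it, and streaming commands are allowed inside. The inner pipeline here uses a 10-event moving average purely as a stand-in for the real metric:

index=server
| stats avg(response_time) as avg_rt by server_name
| sort + avg_rt
| head 3
| map maxsearches=3 search="search index=server server_name=\"$server_name$\"
    | sort 0 + _time
    | streamstats window=10 avg(response_time) as moving_avg
    | stats latest(moving_avg) as new_performance_metric
    | eval server_name=\"$server_name$\""

Note that map replaces the outer rows with the inner results, so if the rank and avg(response_time) columns are still needed, the map output can be joined back to the original top-3 table on server_name.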