Hello! One of our customers has a problem with the executable "C:\Program Files\SplunkUniversalForwarder_script\files\blat\blat.exe", which tries to launch this command:

"C:\Program Files\SplunkUniversalForwarder_script\files\blat\blat.exe" -install mailrelay2.domain.com hostname@domain.com

Can you help me understand whether this process is generated by Splunk or whether it is a custom process? Thank you, Mauro

My app is failing the Python 3 upgrade readiness check. I am getting an error in the cim_actions.py file. I have upgraded the Upgrade Readiness app to a new version, and I even tried creating a new app from scratch, but that fails the readiness check as well.

I need to view the cost per application in Splunk so I can compare it against different products. For instance, I need to see how much it is costing us to ingest data for the M365 application for Splunk. Where can I do this, and what level of permissions do I need? Also, how do I delete an application from Splunk Cloud so that we are no longer billed for its data? Thank you

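A possible starting point, assuming you can search the _internal index: license_usage.log on the license manager reports ingest volume by index and sourcetype rather than by app, so you would map the sourcetypes the M365 add-on uses onto its cost. A minimal sketch:

index=_internal source=*license_usage.log* type=Usage
| stats sum(b) AS bytes BY st, idx
| eval GB = round(bytes/1024/1024/1024, 2)
| sort - GB
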
Hello, I'm building a report to list all phishing and malware threat detections by sender, classification, and threat URL. The data contains two types of events, "clicksAllowed" and "clicksBlocked". I want to add a list of recipients whose click was allowed ("clicksAllowed"), and I'm struggling with how to structure my query. I'm currently trying to do this with stats and eval (I also considered a subsearch). Hopefully I'm on the right track, but I can't figure out how to show only the recipients who clicked while still showing counts of how many clicks were allowed and blocked.

Current search (without who clicked):

index=tap sourcetype="pp_tap_siem" classification IN (phish, malware) threatStatus=active
| eval time=strftime(_time,"%m/%d/%y @ %H:%M:%S")
| stats earliest(time) AS First_Seen, latest(time) AS Last_Seen, count(eval(eventType="clicksPermitted")) AS Clicks_Permitted, count(eval(eventType="clicksBlocked")) AS Clicks_Blocked, values(threatURL) AS TAP_Link BY sender, classification, url
| table First_Seen, Last_Seen, classification, sender, Clicks_Permitted, Clicks_Blocked, AT_Risk_Users, url, TAP_Link
| sort -Last_Seen

Output looks like:

First_Seen | Last_Seen | classification | sender | Clicks_Permitted | Clicks_Blocked | AT_Risk_Users | url | TAP_Link
03/14/23 @ 17:52:36 | 03/14/23 @ 17:52:36 | phish | badguy@domain.com | 1 | 1 | list of 1 person here | hxxp://baddomain.com | hxxp://link_tothreatintel_webportal.com/uniqueguid
01/05/23 @ 12:34:44 | 01/05/23 @ 17:44:41 | phish | badguy2@domain.com | 39 | 3 | list of 39 people here | hxxp://baddomain2.com | hxxp://link_tothreatintel_webportal.com/uniqueguid
01/18/23 @ 15:43:20 | 02/16/23 @ 22:46:19 | malware | badguy3@domain.com | 4 | 0 | list of 4 people here | hxxp://baddomain.com | hxxp://link_tothreatintel_webportal.com/uniqueguid

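One hedged way to populate AT_Risk_Users in the same pass, assuming each event carries a recipient field (that field name is a guess; substitute whatever your pp_tap_siem events actually use), is a conditional values() inside the existing stats:

... | stats earliest(time) AS First_Seen, latest(time) AS Last_Seen, count(eval(eventType="clicksPermitted")) AS Clicks_Permitted, count(eval(eventType="clicksBlocked")) AS Clicks_Blocked, values(eval(if(eventType="clicksPermitted", recipient, null()))) AS AT_Risk_Users, values(threatURL) AS TAP_Link BY sender, classification, url

The if() returns null() for blocked clicks, so values() collects only the recipients whose clicks were permitted, while the two counts are unaffected.
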
I want to blacklist the file "filtered__results.json" but allow Splunk to ingest anything like "filtered__results.json265964694". How do I do this? What is the correct regex for blacklisting only "filtered__results.json"?

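A minimal sketch, assuming this is a file monitor input in inputs.conf (the path is a placeholder): since blacklist is a regex applied to the full path, anchoring with $ excludes the exact file name while still ingesting the suffixed variants:

[monitor:///path/to/dir]
blacklist = filtered__results\.json$
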
Hi, while trying to configure the rapid7intsightsvm app, the data is not being indexed into the index I configured:

Name: InsightVM_Assets
Interval: 3600
Full import schedule (Days): 0
Index: test
Status: false
InsightVM Connection: Splunk_Rapid7
Asset Filter: Site IN [Rapid7]
Import vulnerabilities: 1
Include same vulnerabilities: 0

What changes do we need to get the data into the test index?

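As a hedged first check, assuming the add-on reports its modular input errors to splunkd.log like most Splunk add-ons, searching _internal may show why nothing reaches the index:

index=_internal source=*splunkd.log* log_level=ERROR *rapid7*
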
I'm new to Splunk, so I apologize if this is very obvious, but I haven't seen anything in the community that seems to fit my needs exactly. I'm trying to build a dashboard that displays temperature values from sensors based on messages received in a stream. The messages come in with a time, a sensor id/name, and a temperature. For any given period of time, I won't know how many sensors I will receive temperatures from. Currently my query is based on a table that splits the sensors into columns and then adds the values based on time: [screenshot not included]

This kind of works for me, except I need my dashboard to look like this: [screenshot not included]

The line chart is probably good enough, because I can set the nullValueMode to connect, which covers the gaps in data, but the Singles and Sparklines at the top are not very useful. Basically, I'm looking for any suggestions on how I can improve the query to make that top section work better. I've tried to keep track of a "lastKnownTemp" using last() to fill in the null values, but I don't know how to do it for an unknown number of sensors. Ideally, I think this is the way I would want to go if someone knew of a way to accomplish it. I've also considered using transactions to split the messages by sensor id, but when I get a single event that has a bunch of events inside, I don't really know what to do with them. Any suggestions or information would be greatly appreciated.

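One hedged way to carry the last known temperature forward for an arbitrary, unknown set of sensors (the field names temp and sensor_id are assumptions) is to let timechart create one column per sensor and then use filldown with no arguments, which fills every column from its last non-null value:

... | timechart span=5m latest(temp) BY sensor_id
| filldown

Because neither command names the sensors explicitly, any new sensor appearing in the stream gets its own column automatically.
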
I have the following data in a cell:

1.01.01 Example App AL11111

Is there a way I can split the data into three separate columns? There are no delimiters; I thought of using the space character, but I have entries that also contain spaces in the middle section, e.g.:

1.1.1.10 Example App AL11111

One thing to note: the initial number will always be 8 characters long, and the AL***** will always be 7 characters. Thanks

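A hedged rex sketch that anchors on the first and last tokens rather than counting spaces, so the middle section can contain spaces (the source field name cell_value is a placeholder, and this assumes the last token always starts with AL followed by 5 characters):

| rex field=cell_value "^(?<code>\S+)\s+(?<app_name>.+)\s+(?<al_id>AL\S{5})$"

If the fixed widths are truly guaranteed, (?<code>.{8}) at the start and (?<al_id>.{7})$ at the end would work just as well.
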
Indicator "ingestion_latency_gap_multiplier" exceeded the configured value; the observed value is 98344. Is this normal? We have the Splunk Universal Forwarder installed on all systems, forwarding event logs. Is there any way to improve ingestion latency?

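A hedged way to see where the lag actually accumulates (this relies on _indextime, which is always available at search time; the index name is a placeholder):

index=your_windows_index
| eval lag_sec = _indextime - _time
| stats avg(lag_sec) AS avg_lag, max(lag_sec) AS max_lag BY host
| sort - max_lag

Hosts with consistently large lag point to specific forwarders rather than a platform-wide ingestion problem.
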
We did a Linux patching cycle about a month ago. We have a 10-indexer, 2-site cluster with 3:3 search and replication factors. I put the cluster into maintenance mode, stop Splunk on an indexer, patch, reboot, wait until the indexer is up on the cluster manager, and then repeat the cycle for the remaining indexers. Usually after the patching is done the bucket fixup tasks are few and resolve rapidly. This past patching cycle we had over 10k that are slowly resolving (maybe 30 a day). If I resync a bucket, that number immediately drops by the amount I resync, but I can only do 20 at a time because the cluster manager only allows 20 per page. That approach is silly given I would have to repeat it hundreds of times (currently sitting at 7k). I saw a community post saying a rolling restart of the cluster would resolve this issue, but it didn't. I did notice that 18 indexes (out of 126) have excess buckets; I wasn't sure if that affects anything. Is there a way to resync buckets more easily, maybe 100 at a time, without having to click through prompts?

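For the excess-bucket side specifically, the cluster manager CLI can list and remove them in bulk rather than page by page in the UI (run on the cluster manager; the index argument is optional):

splunk list excess-buckets [index_name]
splunk remove excess-buckets [index_name]

I'm not aware of a supported bulk-resync CLI, so this addresses only the excess buckets, not the pending fixup tasks.
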
Hello. I'm trying to create a report that sends a daily email. I'm using the "Send email" action to send the report, with two options set:

- Inline Table
- Attach CSV

My question is: can I have the "Inline Table" limited to, say, the top 10 results? What I want to achieve is a short summary in the e-mail body (the top 10 results) and the full search result in the CSV file (which can have hundreds of rows). Is this even possible in this one action?

Hi! I'm working on an alert for access from different countries by certain users in a short time period. The alert and the search work fine, but I would like to show more info when the alert triggers (source IP and time).

Here is a sample of the event:

09:09:55,377 INFO [XX.XXX.XXXXXXX.cbapi.dao.login.LoginDAOImpl] (default task-34878) Enviamos parámetros: [authTipoPassword=E, authDato=4249929, authTipoDato=D, nroDocEmpresa=80097256-2, tipoDocEmpresa=D, authCodCanal=999, authIP=45.170.128.191, esDealer=N, dispositivoID=40ee57e1-e5eb-4b14-b7ef-9f0f8ccdf6c2, dispositivoOS=null ]

Here is the search:

index="XXXX" host="XXX.XXX.-*" sourcetype=XXXXXXCBAPI* authDato authIP dao.login.LoginDAOImpl authIP=* authCodCanal=999
| iplocation authIP
| eval Country = if(isnull(Country) OR Country="", "Unknown", Country)
| stats dc(Country) AS count values(Country) AS country values(authIP) AS authIP latest(_time) AS latest BY authDato
| where count > 1
| eval latest=strftime(latest,"%Y-%m-%d %H:%M:%S")
| sort - latest

With this I get a result like:

authDato | count | Country | authIP | latest
2363494 | 2 | Argentina, Paraguay | 170.51.250.39, 170.51.55.186 | 2023-03-15 09:09:09

The problem is that the IP addresses aren't aligned with the country for each IP, and the time isn't aligned with the last country or IP address either. I've tried several things but still can't figure out how to present the results in the right order.

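One hedged way to keep time, country, and IP aligned is to combine them into a single string per event before the stats, so each value of the multivalue field stays internally consistent:

index="XXXX" sourcetype=XXXXXXCBAPI* authDato authIP authCodCanal=999
| iplocation authIP
| eval Country = if(isnull(Country) OR Country="", "Unknown", Country)
| eval access = strftime(_time, "%Y-%m-%d %H:%M:%S") . " | " . Country . " | " . authIP
| stats dc(Country) AS count values(access) AS accesses latest(_time) AS latest BY authDato
| where count > 1
| eval latest=strftime(latest, "%Y-%m-%d %H:%M:%S")
| sort - latest

Each row of accesses then reads "time | country | ip", so the triples can no longer drift apart.
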
I am looking for a search query that can give me a result of any Docker container connections to unusual ports. I tried the query below:

index=aws_eks_* responseObject.spec.limits{}.type=*container*
| search NOT port IN (80, 443, 8080, 8443, 3000, 3306)

[screenshot not included] The above snippet shows the raw data in the events in our Splunk environment. I need help extracting the jobIds (highlighted in the raw data) and adding them as a separate field, like below, using SPL in the user interface: [screenshot not included]

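A minimal sketch of a search-time extraction; since the snippet isn't shown, the pattern here is purely hypothetical and assumes the raw text contains something like jobId=<value>:

... | rex max_match=0 "jobId[=:\s\"]+(?<jobId>[A-Za-z0-9_-]+)"

max_match=0 makes rex capture every occurrence in the event into a multivalue jobId field; adjust the pattern to whatever actually surrounds the IDs in your events.
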
Hello folks, I am having a bit of trouble finishing an update. I get this message during the update: [screenshot not included]. Where is the migration log? mongod.log? I do not see anything to work with. Now the KV store is disabled and I cannot manually migrate to WiredTiger. Thanks in advance.

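For reference, a couple of hedged starting points (paths assume a default $SPLUNK_HOME): the KV store logs normally land in $SPLUNK_HOME/var/log/splunk/mongod.log, with migration messages also written to splunkd.log, and the CLI offers:

splunk show kvstore-status
splunk migrate kvstore-storage-engine --target-engine wiredTiger

Both commands are run from $SPLUNK_HOME/bin.
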
Hey Splunk Team, I was integrating Splunk with a Linux machine; after entering the curl installer script in the terminal, I'm getting an SSL certificate error. Sincerely,

Hi Splunkers, I'm working on a pie chart where I have to put two different fields of results into the graph. For example, I have a column called Risk, where I'm doing stats count by Risk and putting the values into the pie chart. I also want to add another set of results from a different search, like stats count by SLA, to the pie chart. How can I append both results into the pie chart?

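A hedged sketch using append, which stacks the two result sets into a single category/count table that a pie chart can render (the index and filters are placeholders; both BY fields are renamed to one shared column):

index=foo <first search>
| stats count BY Risk
| rename Risk AS category
| append [ search index=foo <second search> | stats count BY SLA | rename SLA AS category ]

The pie chart is then driven by category and count. Note that both sets of slices share one 100% total, which may or may not be what you want.
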
I'm looking to drop EventID 4673 where action=failure. Here is an example log:

3/15/2023 02:51:42 PM
LogName=Security
EventCode=4673
EventType=0
ComputerName=redacted
SourceName=Microsoft Windows security auditing.
Type=Information
RecordNumber=redacted
Keywords=Audit Failure
TaskCategory=Sensitive Privilege Use
OpCode=Info
Message=A privileged service was called.
Subject:
Security ID: redacted
Account Name: redacted
Account Domain: redacted
Logon ID: redacted
Service:
Server: Security
Service Name:
Process:
Process ID: xxxxx
Process Name: C:\Windows\System32\backgroundTaskHost.exe
Service Request Information:
Privileges: SeTcbPrivilege

From reading https://docs.splunk.com/Documentation/Splunk/8.2.6/Admin/Inputsconf#Event_Log_allow_list_and_deny_list_formats I can see that action is not a valid field to filter on:

# Valid keys for the key=regex format:
* The following keys are equivalent to the fields that appear in the text of the acquired events:
* Category, CategoryString, ComputerName, EventCode, EventType, Keywords, LogName, Message, OpCode, RecordNumber, Sid, SidType, SourceName, TaskCategory, Type, User

So I chose to use Keywords, which has the value "Audit Failure". Here is my inputs.conf:

[WinEventLog://Security]
disabled = 0
index = corp_oswinsec
current_only = 1
evt_resolve_ad_obj = 0
checkpointInterval = 5
blacklist1 = EventCode="4673" Keywords="Audit Failure"

I am still seeing these events being indexed, however. Any tips on where I am going wrong would be much appreciated!

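A hedged variant worth trying, assuming this stanza lives on the forwarder itself and the forwarder has been restarted after the change (blacklists apply only to events read after the restart, so already-indexed events remain searchable). Anchoring the regexes also makes the match intent explicit:

[WinEventLog://Security]
blacklist1 = EventCode="^4673$" Keywords="^Audit\s+Failure$"

If the input is actually rendered as XML (renderXml=true), the classic text keys above won't match and the filter would need to target the XML form instead.
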
Hi all, I'm trying to get data from the Microsoft Security add-on, specifically Defender data. Even after granting the necessary permissions on the Threat API in Azure, I'm still not getting the data. Any help is appreciated.

I am working to merge two searches. The first search outputs one or more account names:

index=x sourcetype=y
| table account

The second search (below), for each account name, filters the lookup CSV table account_lookup on that account name and counts the number of dates in an adjacent column of the lookup table that fall within the last seven days:

| inputlookup append=T account_lookup where account=Account_A
| where time > relative_time(now(), "-7d")
| stats count AS "Accounts Updated in Last 7 Days"

My searches and attempts to apply related information have not yet revealed how I can pass the account names output by the first search into the lookup in the second search. Many thanks for any help. Sven

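A hedged way to wire them together is a subsearch, which expands into account="..." OR account="..." terms automatically; this assumes the event field and the lookup column are both named account, and that time in the lookup is an epoch value (as the relative_time comparison implies):

| inputlookup account_lookup
| search [ search index=x sourcetype=y | dedup account | fields account ]
| where time > relative_time(now(), "-7d")
| stats count AS "Accounts Updated in Last 7 Days" BY account

The dedup keeps the subsearch small; drop the BY account clause if you want one combined total instead of a per-account count.
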