All Topics

Does setting the following configuration in itsi_notable_event_retention.conf send the events to the archive if the object count limit is reached before the specified time period? For example, if the event object count exceeds 500000 before the retention time period elapses, will the retention objects go to the archive?

[itsi_notable_group_user]
# Default is one year
retentionTimeInSec = 31536000
retentionObjectCount = 500000
disabled = 0
object_type = notable_event_group
Hello, I have a question related to graphs in Transaction Snapshots. Why do some snapshots have a Full (blue icon) / Partial (gray icon) / Error / No graph? Where do those transactions gather data from, and why do some of those transactions have full or partial graphs? Thanks! Regards, DW
Hi Team,

I need support to understand why I am not able to see the lookup for my created Threat Intelligence Management source under Splunk Enterprise Security, pulled from GitHub. I am trying to get MAC addresses and their vendor details as intelligence using the "Threat Intelligence Management" feature.

My configuration is below:

1. Created the source under Threat Intelligence Manager with the "Line Oriented" selection.
2. Input name mac_vendor, description mac_vendor, type mac_vendor, with the GitHub URL details.
3. Unchecked the "Threat Intelligence" box.
4. File parser: Auto
5. Delimiting regular expression: ,
6. Ignoring regular expression: (^#|^\s*$)
7. Fields section: mac:$1,vendor:$2
8. Skip header lines: 0, with the rest configured as default.

Sample event showing a successful file download:

INFO pid=28775 tid=MainThread file=threatlist.py:download_threatlist_file:549 | stanza="mac_ioc" retries_remaining="3" status="threat list downloaded" file="/opt/splunk/var/lib/splunk/modinputs/threatlist/mac_ioc" bytes="678565" url="https://gist.githubusercontent.com/aallan/b4bb86db86079509e6159810ae9bd3e4/raw/846ae1b646ab0f4d646af9115e47365f4118e5f6/mac-vendor.txt"

What am I missing to see this information in Splunk SA Intelligence?
We are periodically seeing spikes of Storage I/O Saturation (Monitoring Console > Resource Usage: Deployment). When split by host, we can see that this affects all 6 indexers nearly simultaneously for the /opt/splunkdata mount points. As expected, this triggers the Health Status notification throughout the day (warning or alert). To note, load averages are regularly > 5% with CPU usage normally under 10% for each indexer (24 cores each), and RAM usage is around 30% per indexer. We are wondering if our physical storage and/or network might be a bottleneck or if it's something on the Splunk side. For a Splunk Admin beginner, could someone please offer some suggestions on where we could start troubleshooting these spikes, or explain in more detail the specifics around Storage I/O Saturation? We are on Enterprise 9.0.4 across the board and are considering the recent update sooner rather than later. Thank you!
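Not a definitive answer, but one hedged place to start is the platform instrumentation data that the Monitoring Console itself reads. A minimal sketch, assuming your indexers populate the _introspection index with an IOStats component and the data.* field names shown here (verify the component and field names against your own _introspection events first):

index=_introspection component=IOStats
| search "data.mount_point"="/opt/splunkdata"
| timechart span=5m avg("data.avg_total_ms") as avg_io_ms max("data.reads_ps") as reads_ps max("data.writes_ps") as writes_ps by host

If the latency spikes line up across all six indexers at the same minute, that usually points toward shared storage or the path to it rather than any single Splunk instance.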
I have a field named "port_number" in my results which returns multiple values, as follows:

source          destination     port_number
3.4.5.6         22.34.56.78     1234
12.23.43.54     13.45.65.76     1234 3456 4567 8764 2345 2345 2349
12.32.43.54     65.43.21.12     7899 6788 4566 2344

The query is as follows:

index=ABC | stats values(port_number) as port_number by source, destination

Now how can I make the result look like the following?

Expected outcome:

source          destination     port_number
3.4.5.6         22.34.56.78     1234
12.23.43.54     13.45.65.76     1234 3456 Check logs for more port numbers
12.32.43.54     65.43.21.12     7899 6788 Check logs for more port numbers

As you can see in the above result, all I am trying to do is: if there are more than 2 values in the field, add a text note instead of displaying all the numbers, as some results have more than 100 ports.
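Not an authoritative answer, just a minimal sketch of one way to do this with mvcount/mvindex after the stats command, keeping the first two values and appending the note when there are more (index and field names taken from the question):

index=ABC
| stats values(port_number) as port_number by source, destination
| eval port_number=if(mvcount(port_number)>2, mvappend(mvindex(port_number,0,1), "Check logs for more port numbers"), port_number)

mvindex(port_number,0,1) keeps only the first two values of the multivalue field, and mvappend tacks the note on as an extra value.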
I have the query below, which gives the average TPS variance output for the complete 30 days. Can you please help guide me with the logic for how to modify this query for MaxTPS variance? The requirement is to calculate MaxTPS variance instead of the average TPS variance logic below.

Modification to be added:

index=<search string> earliest=-30d@d date_hour>=$timefrom$ AND date_hour<$timeto$
| timechart span=$TotalMinutes $m count(eval(searchmatch("sent"))) as HotCountToday
| eval TPS=round(HotCountToday/($TotalMinutes $*60),2)
| eval TotalMinutes = ($timeto$ - $timefrom$) * 60
| eval Day=strftime(_time, "%Y-%m-%d")
| stats max(TPS) as MaxTPS by Day

Original query:

index=<search_strings> earliest=-30d@d date_hour>=$timefrom$ AND date_hour<$timeto$
| eval Date = strftime(_time, "%Y-%m-%d")
| stats count(eval(Date=strftime(now(), "%Y-%m-%d"))) as HotCountToday,
  count(eval(Date=strftime(relative_time(now(), "-1d@d"), "%Y-%m-%d"))) as HotCountBefore1Day,
  count(eval(Date=strftime(relative_time(now(), "-2d@d"), "%Y-%m-%d"))) as HotCountBefore2Day,
  count(eval(Date=strftime(relative_time(now(), "-3d@d"), "%Y-%m-%d"))) as HotCountBefore3Day,
  count(eval(Date=strftime(relative_time(now(), "-4d@d"), "%Y-%m-%d"))) as HotCountBefore4Day,
  count(eval(Date=strftime(relative_time(now(), "-5d@d"), "%Y-%m-%d"))) as HotCountBefore5Day,
  count(eval(Date=strftime(relative_time(now(), "-6d@d"), "%Y-%m-%d"))) as HotCountBefore6Day,
  count(eval(Date=strftime(relative_time(now(), "-7d@d"), "%Y-%m-%d"))) as HotCountBefore7Day,
  . .
  count(eval(Date=strftime(relative_time(now(), "-30d@d"), "%Y-%m-%d"))) as HotCountBefore30Day by TestMQ
| eval Today = strftime(now(), "%Y-%m-%d")
| eval Before1Day = strftime(relative_time(now(), "-1d@d"), "%Y-%m-%d")
| eval Before2Day = strftime(relative_time(now(), "-2d@d"), "%Y-%m-%d")
| eval Before3Day = strftime(relative_time(now(), "-3d@d"), "%Y-%m-%d")
| eval Before4Day = strftime(relative_time(now(), "-4d@d"), "%Y-%m-%d")
| eval Before5Day = strftime(relative_time(now(), "-5d@d"), "%Y-%m-%d")
| eval Before6Day = strftime(relative_time(now(), "-6d@d"), "%Y-%m-%d")
| eval Before7Day = strftime(relative_time(now(), "-7d@d"), "%Y-%m-%d")
. .
| eval Before23Day = strftime(relative_time(now(), "-23d@d"), "%Y-%m-%d")
| eval TotalMinutes = ($timeto$ - $timefrom$) * 60
| eval TPS_Today=round(HotCountToday/(TotalMinutes*60),3)
| eval TPS_Before1Day=round(HotCountBefore1Day/(TotalMinutes*60),3)
| eval TPS_Before2Day=round(HotCountBefore2Day/(TotalMinutes*60),3)
| eval TPS_Before3Day=round(HotCountBefore3Day/(TotalMinutes*60),3)
| eval TPS_Before4Day=round(HotCountBefore4Day/(TotalMinutes*60),3)
| eval TPS_Before5Day=round(HotCountBefore5Day/(TotalMinutes*60),3)
| eval TPS_Before6Day=round(HotCountBefore6Day/(TotalMinutes*60),3)
| eval TPS_Before7Day=round(HotCountBefore7Day/(TotalMinutes*60),3)
. .
| eval TPS_Before30Day=round(HotCountBefore30Day/(TotalMinutes*60),3)
| eval Variance_TPS_Today = case(TPS_Before7Day > TPS_Today, round(((TPS_Before7Day - TPS_Today) / TPS_Before7Day) * 100,3), TPS_Before7Day < TPS_Today, round(((TPS_Today - TPS_Before7Day) / TPS_Today) * 100,3), TPS_Before7Day = TPS_Today, round(((TPS_Before7Day - TPS_Today)) * 100,3))
| eval Variance_TPS_Before1Day = case(TPS_Before8Day > TPS_Before1Day, round(((TPS_Before8Day - TPS_Before1Day) / TPS_Before8Day) * 100,3), TPS_Before8Day < TPS_Before1Day, round(((TPS_Before1Day - TPS_Before8Day) / TPS_Before1Day) * 100,3), TPS_Before8Day = TPS_Before1Day, round(((TPS_Before8Day - TPS_Before1Day)) * 100,3))
| eval Variance_TPS_Before2Day = case(TPS_Before9Day > TPS_Before2Day, round(((TPS_Before9Day - TPS_Before2Day) / TPS_Before9Day) * 100,3), TPS_Before9Day < TPS_Before2Day, round(((TPS_Before2Day - TPS_Before9Day) / TPS_Before2Day) * 100,3), TPS_Before9Day = TPS_Before2Day, round(((TPS_Before9Day - TPS_Before2Day)) * 100,3))
. . .
| eval Variance_TPS_Before23Day = case(TPS_Before30Day > TPS_Before23Day, round(((TPS_Before30Day - TPS_Before23Day) / TPS_Before30Day) * 100,3), TPS_Before30Day < TPS_Before23Day, round(((TPS_Before23Day - TPS_Before30Day) / TPS_Before23Day) * 100,3), TPS_Before30Day = TPS_Before23Day, round(((TPS_Before30Day - TPS_Before23Day)) * 100,3))
| eval {Today}=Variance_TPS_Today | fields - Today Variance_TPS_Today
| eval {Before1Day}=Variance_TPS_Before1Day | fields - Before1Day Variance_TPS_Before1Day
| eval {Before2Day}=Variance_TPS_Before2Day | fields - Before2Day Variance_TPS_Before2Day
| eval {Before3Day}=Variance_TPS_Before3Day | fields - Before3Day Variance_TPS_Before3Day
| eval {Before4Day}=Variance_TPS_Before4Day | fields - Before4Day Variance_TPS_Before4Day
| eval {Before5Day}=Variance_TPS_Before5Day | fields - Before5Day Variance_TPS_Before5Day
| eval {Before6Day}=Variance_TPS_Before6Day | fields - Before6Day Variance_TPS_Before6Day
| eval {Before7Day}=Variance_TPS_Before7Day | fields - Before7Day Variance_TPS_Before7Day
. . .
| eval {Before23Day}=Variance_TPS_Before23Day | fields - Before23Day Variance_TPS_Before23Day
| table TestMQ 2*

Query output as below:

TestMQ     2023-06-23   2023-06-22   2023-06-21   2023-06-20   2023-06-19   2023-06-18   2023-06-17   2023-06-16   (and so on, till 30 days)
MQ.NAME    5.003        17.004       25.775       19.882       32.114       56.881       10.991       85.114       ....

I am new to Splunk and still learning. Kindly suggest how this can be achieved. @ITWhisperer @bowesmana @xpac @MuS @yuanliu - looking forward to hearing from you, please help.
As a Splunk Admin, you have the critical role of getting data into Splunk for the rest of your org. The process can seem daunting (we understand), so our customer success team has drummed up some step-by-step resources to help you get started.

Cloud Native Data

Easily ingest your AWS, Azure, or GCP data in minutes with the new Data Manager, native to Splunk Cloud Platform, and follow along with a step-by-step tutorial video. We understand that getting data into Splunk Cloud can be cumbersome and repetitive. This is why we are excited to introduce Data Manager - now in preview! Data Manager is a new modernized, simplified, and automated experience for onboarding cloud-native data sources such as AWS. Watch the video. Looking for additional guidance? Refer to the Data Manager documentation for more info!

Linux or Windows Data

The Universal Forwarder is a common way to easily and securely send lossless data to Splunk. First, you'll want to download the Universal Forwarder to get started. Then, review our documentation on how to forward data to Splunk Cloud Platform.

Pro Tips: Use Ingest Actions to filter, mask, and route data at ingest time and at the edge, using only simple clicks (no command lines to write). Use the Splunk Success Framework (SSF) to understand your getting-data-in needs before you start adding inputs to your deployment. You can then use the Splunk platform tools to configure many kinds of data inputs, including those that are specific to particular application needs.

If you have any questions regarding data onboarding, we definitely recommend checking out the Getting Data In section in our community. And be sure to join us during Splunk Community Office Hours if you are interested in getting live, hands-on help from Splunk experts. For additional assistance with your Splunk deployment, explore the available help options for Splunk Cloud Platform or contact your Account Manager. We hope you found this helpful -- we'll be back with more guides, tips, and tricks in the future, so stay tuned and thanks for choosing Splunk!
Hello All, I need help building an SPL search that retrieves the Job Inspector results for each query executed by end users. Example fields: command.search.rawdata, command.search.kv, command.search.index. Any information or guidance will be very helpful. Thank you, Taruchit
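Not a definitive answer, but a hedged sketch of one starting point: the search jobs REST endpoint exposes per-job summary statistics, and on recent Splunk versions the full Job Inspector breakdown (command.search.rawdata, command.search.kv, command.search.index, and so on) is also written as search telemetry to the _introspection index. The endpoint and field names below are assumptions to verify in your environment:

| rest /services/search/jobs splunk_server=local
| table eai:acl.owner, title, dispatchState, isDone, runDuration, scanCount, eventCount, diskUsage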
Hi, I'm trying to use an index and the lookup function. However, the values in those fields are not an exact match, even though the email addresses belong to one person. How can I get the non-exact match to work?

e.g. from index=user:
email_address, team
john.doe@xyz.com, blue

from file.csv:
email_address, department
john.doe@xyz.com.au, HR

example search:
index=user | lookup "file.csv" | table email_address department
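Not an official answer, just a minimal sketch of one way around the non-exact match: build a common join key from the part of the address before the @ on both sides, then join on that (the lookup file name and field names are taken from the question; this assumes the local part of the address is unique enough to identify a person):

index=user
| eval join_key=lower(mvindex(split(email_address, "@"), 0))
| join type=left join_key
    [| inputlookup file.csv
     | eval join_key=lower(mvindex(split(email_address, "@"), 0))
     | fields join_key department]
| table email_address team department

Another option, if the differences are only in the domain suffix, is a lookup definition whose match_type is set to WILDCARD(email_address), with wildcarded values such as john.doe@xyz.com* in the CSV.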
I have these events in the logs. So far I have only set up a username and password from Settings > All Settings > Credentials > Manage API Poller Credentials, but I am getting a 403 error, so I am missing something. Is it my account setup?

2023-06-23 07:08:53,939 +0000 log_level=ERROR, pid=340421, tid=Thread-4, file=engine.py, func_name=_send_request, code_line_no=318 | [stanza_name="test_alerts"] The response status=403 for request which url=https://x.x.x.x:17778/SolarWinds/InformationService/v3/Json/Query?query=SELECT+EventID,EventTime,NetworkNode,NetObjectID,NetObjectValue,EngineID,EventType,Message,Acknowledged,NetObjectType,TimeStamp,DisplayName,Description,InstanceType,Uri,InstanceSiteId+FROM+Orion.Events+WHERE+EventTime>'2023-06-23 00:00:00.00' and method=GET.
Hello. myphantom.com is closed. How can I now download the ISO image for a VM, or how can I access the Community version for Google Cloud? Can someone help please?
Hi, can we see queries run by another Splunk user for any app? Does it require any extra privileges / roles? Please let me know. Regards, PNV
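Not a definitive answer, but a minimal sketch of the usual approach: if your role is allowed to read the _audit index (which typically requires an admin-level role), searches run by every user are logged there. Field names as they commonly appear in audit events; adjust as needed:

index=_audit action=search info=granted search=*
| search NOT user="splunk-system-user"
| table _time user search savedsearch_name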
Upgrading the glibc package to version 2.17-326.0.5.el7_9 on Oracle Linux 7 can cause crashes. Please see https://github.com/oracle/oracle-linux/issues/90 for the solution. Posting just for visibility.
I need an API call to run a Splunk report that has already been saved and add the most recent values to the report. I do not wish to wait until the scheduled cron time.

I attempted to use the "dispatch.now" parameter with this API: "saved/searches/name/dispatch". It started a job and executed the search, and I could see the results in finished jobs, but my report was not updating with the most recent information.

I also need an API to check the status of the executed query to see whether it has finished or is still running. The response from the API call instructs me to look for the parameter isDone=true; however, I am unable to depend on the results, because the jobs are still running when I manually check their status.
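For the status-check part, one hedged option from inside Splunk itself is the search jobs REST endpoint, which exposes each job's dispatchState and isDone flags (the sid value below is a placeholder for the sid returned by the dispatch call):

| rest /services/search/jobs splunk_server=local
| search sid="<your_sid>"
| table sid, label, dispatchState, isDone, doneProgress, runDuration

The same information is available over HTTP at /services/search/jobs/<sid>; a job only counts as finished once dispatchState is DONE and isDone is 1.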
We are looking at utilizing the "InfoSec App for Splunk"; however, the last version is from June of 2021 (two years ago). Has this app been superseded by another, or is there a different long-term plan for the app? We just want to know whether we should continue down this path or choose another. Thanks!
Since the release of Splunk SOAR 6.0, the Splunk SOAR team has been hard at work implementing new features and integrations to help improve the SOAR user experience. The 6.0 release is the culmination of efforts toward a truly unified Splunk security experience, providing a single security operations solution through its integration with Mission Control.

Key Takeaways:
Learn about the latest features in Splunk SOAR
Learn how Splunk SOAR integrates with Mission Control
Learn about changes to the Automation Broker
The query below gives the results for 30 days of MaxTPS data (between the time range of 2:00 to 4:00):

index=<search_strings> earliest=-30d@d date_hour>=2 AND date_hour<4
| timechart span=120m count(eval(searchmatch("sent"))) as HotCountToday
| eval TPS=round(HotCountToday/(120*60),2)
| eval Day=strftime(_time, "%Y-%m-%d")
| stats max(TPS) as MaxTPS by Day

Now I want to calculate the MaxTPS variance for the complete 30 days: calculate the percentage MaxTPS variance between today's value and last week's value (and so on) and show the MaxTPS variance percentage. (Example: Monday vs. last week's Monday, Sunday vs. last week's Sunday, and so on.)

I am new to Splunk and still learning. Looking forward to hearing from you. Kindly suggest how this can be achieved. @ITWhisperer @bowesmana @xpac
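Not a definitive answer, just a minimal sketch of one way to extend that search: once there is one MaxTPS row per Day, sort by Day and use streamstats to fetch the value from 7 rows (days) earlier, then compute the percentage variance by dividing the absolute difference by the larger of the two values. This assumes every day in the range produces a row; earliest is pushed back to -37d@d so the oldest days also have a prior-week value, and day_num guards the first week where no prior value exists:

index=<search_strings> earliest=-37d@d date_hour>=2 AND date_hour<4
| timechart span=120m count(eval(searchmatch("sent"))) as HotCount
| eval TPS=round(HotCount/(120*60),2)
| eval Day=strftime(_time, "%Y-%m-%d")
| stats max(TPS) as MaxTPS by Day
| sort 0 Day
| streamstats count as day_num
| streamstats current=f window=7 first(MaxTPS) as MaxTPS_LastWeek
| eval Variance_Pct=if(day_num > 7 AND max(MaxTPS, MaxTPS_LastWeek) > 0, round(abs(MaxTPS - MaxTPS_LastWeek) / max(MaxTPS, MaxTPS_LastWeek) * 100, 3), null())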
I am getting the log file imported into Splunk, but each line is an event with no field names. Can I break up the line into columns? If not, how do I parse the line to extract a number?

The index is: index=test_7d sourcetype=kafka:producer:bigfix

Events are:

2023-06-22 09:15:44,270 root - INFO - 114510 events have been uploaded to topic DC2_Endpoint_Configuration_IBM_BigFix_Patch_Join on Kafka
2023-06-22 09:15:37,204 root - INFO - Executing getDatafromDB
2023-06-22 09:15:35,704 root - INFO - 35205 events have been uploaded to topic DC2_Endpoint_Configuration_IBM_BigFix_Patch_Join on Kafka
2023-06-22 09:15:33,286 root - INFO - Executing getDatafromDB
2023-06-22 09:15:32,703 root - INFO - 167996 events have been uploaded to topic DC2_Endpoint_Configuration_IBM_BigFix_Patch_Join on Kafka
2023-06-22 09:15:22,479 root - INFO - Executing getDatafromDB
2023-06-22 09:15:19,031 root - INFO - 181 events have been uploaded to topic DC2_Endpoint_Configuration_IBM_BigFix_Patch_Join on Kafka

Each line/event starts with the date; the word wrap is making it look incorrect. I need to parse the number on each line that follows '- INFO -', and add a zero if there is no number. I can do this with an eval, but how do I parse when there is no field name to give to the 'regex' command? Thank you for looking at this problem!
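A minimal sketch of one way to do this: rex runs against _raw by default, so no existing field name is needed, and fillnull supplies the zero for lines without a count (index and sourcetype are from the question; the field name event_count is made up for the example):

index=test_7d sourcetype=kafka:producer:bigfix
| rex "-\s+INFO\s+-\s+(?<event_count>\d+)\s+events have been uploaded"
| fillnull value=0 event_count
| table _time event_count _raw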
In Splunk DB Connect, some specific database inputs are not indexing properly into Splunk, i.e. they are not forwarding their data to the Splunk search head from the databases, whereas the queries against those databases execute fine. What could be the issue: is it on the server side or on the Splunk side?
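Not a definitive answer, but one hedged place to start looking is DB Connect's own logging, which normally ends up in the _internal index; the source wildcard below is an assumption based on the app's usual log file naming, so adjust it to whatever log files you actually see under $SPLUNK_HOME/var/log/splunk:

index=_internal source=*splunk_app_db_connect* (ERROR OR FATAL)
| stats count by source

If the inputs run on a heavy forwarder, search that host's _internal data; errors there usually help distinguish a database/driver problem from an indexing or forwarding problem on the Splunk side.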
I am trying to find the Time Taken for the last 7 days for a batch job using a Splunk search, then find the average of the time taken, and then find the jobs whose time taken is greater than the average time.

splunk search
| eval sTime=strptime(StartTime, "%B %d, %Y %I:%M:%S %p")
| eval eTime=strptime(EndTime, "%B %d, %Y %I:%M:%S %p")
| eval TimeTaken = ceil((eTime-sTime)/60)
| stats avg(TimeTaken) as avgtime by JobbName
| where TimeTaken > avgtime

Once I use the stats average command, the TimeTaken values no longer come through. I tried using streamstats but the average time calculation is not right.
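A minimal sketch of the usual fix: stats collapses the rows, so TimeTaken no longer exists after it. eventstats computes the average while keeping every original row, so the filter can compare each job's TimeTaken to the per-job-name average (field names, including JobbName, copied from the question; the first line is a placeholder for the base search):

<your base search>
| eval sTime=strptime(StartTime, "%B %d, %Y %I:%M:%S %p")
| eval eTime=strptime(EndTime, "%B %d, %Y %I:%M:%S %p")
| eval TimeTaken = ceil((eTime - sTime) / 60)
| eventstats avg(TimeTaken) as avgtime by JobbName
| where TimeTaken > avgtime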