All Topics


My team has a Splunk Cloud environment (Splunk Cloud 9.0.2209.3) and we are exploring our readiness to upgrade to Python 3. However, when we scan with the Upgrade Readiness App, several public apps fail the Python scan. This is concerning because most of the failing apps are Splunk-supported add-ons with recent updates. The following Splunkbase apps are failing the Python scan:

Splunk App for Salesforce (version 2.0.1) - Not Supported
Slack Add-on for Splunk (version 2.0.1) - Not Supported
Splunk Add-on for Amazon Web Services (AWS) (version 7.0.0) - Splunk Supported Add-on
Splunk Add-on for Github (version 2.1.1) - Splunk Supported Add-on
Splunk Add-on for Salesforce (version 4.7.1) - Splunk Supported Add-on
Splunk Add-on for ServiceNow (version 7.6.0) - Splunk Supported Add-on
Splunk Add-on for Salesforce Streaming API (version 1.0.6) - Not Supported

Furthermore, when using the Splunk AppInspect CLI tool, each of these add-ons passes all of the python-version-related checks. Please help us understand whether these apps are actually failing the Python scan, or whether we can safely request the Python 3 upgrade for our Splunk Cloud environment. Thank you!
Good afternoon, I have a query to get disk space from servers. Each server has between 1 and 3 drives. My query outputs a list of all Hosts, Drives, Times and Free Space %, but the results are for the last minute and show all results for each host in that minute (several each). Is there a way to limit the results to UP TO 3 per host? If I use "dedup 3", it keeps 3 even for the hosts that have only 1 or 2 drives. Thank you!

(index=main) sourcetype=perfmon:LogicalDisk instance!=_Total instance!=Harddisk*
| eval FreePct-Other=case(match(instance, "C:"), null(), match(instance, "D:"), null(), true(), storage_free_percent), FreeMB-Other=case(match(instance, "C:"), null(), match(instance, "D:"), null(), true(), Free_Megabytes), FreePct-{instance}=storage_free_percent, FreeMB-{instance}=Free_Megabytes
| search counter="% Free Space"
| eval Time=strftime(_time,"%Y-%m-%d %H:%M:%S")
| table Time, host, instance, Value
| eval Value=round(Value,0)
| rename Value AS "Free%"
| rename instance AS "Drive"
| rename host AS "Host"
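A minimal sketch of one way to cap the output at one row per drive (and therefore at most 3 per host), assuming the latest sample per host/drive combination is what matters and that Value carries the free-space percentage for the "% Free Space" counter:

index=main sourcetype=perfmon:LogicalDisk counter="% Free Space" instance!=_Total instance!=Harddisk*
| dedup host instance
| eval Time=strftime(_time, "%Y-%m-%d %H:%M:%S"), Value=round(Value, 0)
| table Time, host, instance, Value
| rename Value AS "Free%", instance AS "Drive", host AS "Host"

Because dedup host instance keeps only the most recent event for each host/drive pair, a host with one drive yields one row and a host with three drives yields three.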
I have the following query that sets 'Results' based on the JSON portion of my logs below:

index="internallogs" sourcetype="internal" host="*hostadress*" source="FunctionApp" (/api/assignmentsearch OR AssignmentSearchResults)
| eval _raw=replace(_raw,"(\\\\r\\\\n\s+)","")
| eval _raw=replace(_raw,"\\\\(\")","\"")
| rename additionalProperties.Key as TransactionID
| rename additionalProperties.JSON as JSON
| spath input=JSON path=AssignmentSearchResults{}.FullName output=RsUWName
| spath input=JSON path=Headers{}.profitCenterCode{} output=RqProfitCenter
| lookup uwmanagement-divisions.csv ProfitCenterCode as RqProfitCenter Outputnew ProfitCenterDescription as ProfitCenterOutput
| transaction TransactionID
| eval Results=case(isnull(RsUWName), "Zero Returned", mvcount(RsUWName) = 1, "Exactly One Returned", mvcount(RsUWName) > 1, "More than One Returned")

There are three profit centers that are extracted from the logs: 1) Div1, 2) Div2 and 3) Div3. How would I extract the percentage of results where Results = 'Exactly One Returned', by profitCenterCode? I need this in a chart format. Please let me know what other questions you have.
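A minimal sketch of one way to get the percentage of 'Exactly One Returned' per profit center, appended after the existing eval, assuming RqProfitCenter survives the transaction command and holds values such as Div1, Div2 and Div3:

| stats count as Total, count(eval(Results="Exactly One Returned")) as ExactlyOne by RqProfitCenter
| eval PercentExactlyOne=round(ExactlyOne / Total * 100, 2)
| table RqProfitCenter, ExactlyOne, Total, PercentExactlyOne

Rendered as a bar or column chart over RqProfitCenter, PercentExactlyOne gives the per-division percentage directly.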
Is there a way to implement multiple tabs/pages for a single dashboard? I have previously asked how to do this and got an answer for the classic dashboard; I wanted to know whether there is a way to do it in the newer Dashboard Studio. Thanks
Does setting the following configuration in itsi_notable_event_retention.conf send the events to the archive if the limit is reached before the specified time period? For example, if the notable event group object count exceeds 500,000 before the retention time period elapses, will the retained objects go to the archive?

[itsi_notable_group_user]
# Default is one year
retentionTimeInSec = 31536000
retentionObjectCount = 500000
disabled = 0
object_type = notable_event_group
Hello, I have a question about graphs in transaction snapshots. Why do some snapshots have a Full (blue icon), Partial (gray icon), Error, or No graph? Where do those transactions gather their data from, and why do some of them have only full or partial graphs? Thanks! Regards, DW
Hi Team, I need support to understand why I am not able to see the lookup for my created Threat Intelligence Management source under Splunk Enterprise Security, pulled from GitHub. I am trying to ingest MAC addresses and their vendor details as intelligence using the "Threat Intelligence Management" feature.

My configuration is below:

1. Created the source under Threat Intelligence Manager with "Line Oriented" selected.
2. Input name mac_vendor with description mac_vendor, type also mac_vendor, with the GitHub URL details.
3. Unchecked the "Threat Intelligence" box.
4. File Parser: Auto
5. Delimiting regular expression: ,
6. Ignoring regular expression: (^#|^\s*$)
7. Field section: mac:$1,vendor:$2
8. Skip header lines: 0, with the rest configured as default only.

Sample event showing a successful file download:

INFO pid=28775 tid=MainThread file=threatlist.py:download_threatlist_file:549 | stanza="mac_ioc" retries_remaining="3" status="threat list downloaded" file="/opt/splunk/var/lib/splunk/modinputs/threatlist/mac_ioc" bytes="678565" url="https://gist.githubusercontent.com/aallan/b4bb86db86079509e6159810ae9bd3e4/raw/846ae1b646ab0f4d646af9115e47365f4118e5f6/mac-vendor.txt"

What am I missing to see this information in Splunk S.A Intelligence?
We are periodically seeing spikes of Storage I/O Saturation (Monitoring Console > Resource Usage: Deployment). When split by host, we can see that this affects all 6 indexers nearly simultaneously for the /opt/splunkdata mount points. As expected, this triggers the Health Status notification throughout the day (warning or alert). Of note, load averages are regularly > 5%, with CPU usage normally under 10% for each indexer (24 cores each) and RAM usage around 30% per indexer. We are wondering whether our physical storage and/or network might be a bottleneck, or whether it's something on the Splunk side. For a Splunk admin beginner, could someone please offer some suggestions on where we could start troubleshooting these spikes, or explain in more detail the specifics of Storage I/O Saturation? We are on Enterprise 9.0.4 across the board and are considering the recent update sooner rather than later. Thank you!
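A minimal sketch of a starting point for trending disk wait per indexer from the introspection data that the Monitoring Console panel is built on; the component name and the data.mount_point / data.avg_total_ms field names are assumptions to verify against your own _introspection events:

index=_introspection sourcetype=splunk_resource_usage component=IOStats "data.mount_point"="/opt/splunkdata*"
| timechart span=5m max("data.avg_total_ms") as avg_io_wait_ms by host

If the spikes line up with scheduled search storms or bucket maintenance windows, the bottleneck is more likely workload-driven; if they line up across all six indexers regardless of Splunk activity, shared storage or the network path to it becomes the prime suspect.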
I have a field named "port_number" in my results which returns multiple values, as follows.

source | destination | port_number
3.4.5.6 | 22.34.56.78 | 1234
12.23.43.54 | 13.45.65.76 | 1234 3456 4567 8764 2345 2345 2349
12.32.43.54 | 65.43.21.12 | 7899 6788 4566 2344

The query is as follows:

Index= ABC
| stats values(port_number) as port_number by source, destination

Now how can I make the result look like the following?

Expected outcome:

source | destination | port_number
3.4.5.6 | 22.34.56.78 | 1234
12.23.43.54 | 13.45.65.76 | 1234 3456 Check logs for more port numbers
12.32.43.54 | 65.43.21.12 | 7899 6788 check logs for more port numbers

As you can see in the above result, all I am trying to do is: if there are more than 2 values in a field, add a text note instead of displaying all the numbers, as some results have more than 100 ports.
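A minimal sketch of one way to truncate the multivalue field after the stats, assuming keeping the first two values is acceptable:

index=ABC
| stats values(port_number) as port_number by source, destination
| eval port_number=if(mvcount(port_number) > 2, mvappend(mvindex(port_number, 0, 1), "Check logs for more port numbers"), port_number)

mvindex(port_number, 0, 1) keeps the first two values, and mvappend tacks the note onto the end only when more than two exist.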
I have the below query, which provides the average TPS variance output for the complete 30 days. Can you please help guide me with the logic on how to modify this query for MaxTPS variance? The requirement is to calculate the MaxTPS variance (instead of the below logic for average TPS variance).

Modification to be added:

index=<search string> earliest=-30d@d date_hour>=$timefrom$ AND date_hour<$timeto$
| timechart span=$TotalMinutes $m count(eval(searchmatch("sent"))) as HotCountToday
| eval TPS=round(HotCountToday/($TotalMinutes $*60),2)
| eval TotalMinutes = ($timeto$ - $timefrom$) * 60
| eval Day=strftime(_time, "%Y-%m-%d")
| stats max(TPS) as MaxTPS by Day

Original query:

index=<search_strings> earliest=-30d@d date_hour>=$timefrom$ AND date_hour<$timeto$
| eval Date = strftime(_time, "%Y-%m-%d")
| stats count(eval(Date=strftime(now(), "%Y-%m-%d"))) as HotCountToday,
    count(eval(Date=strftime(relative_time(now(), "-1d@d"), "%Y-%m-%d"))) as HotCountBefore1Day,
    count(eval(Date=strftime(relative_time(now(), "-2d@d"), "%Y-%m-%d"))) as HotCountBefore2Day,
    count(eval(Date=strftime(relative_time(now(), "-3d@d"), "%Y-%m-%d"))) as HotCountBefore3Day,
    count(eval(Date=strftime(relative_time(now(), "-4d@d"), "%Y-%m-%d"))) as HotCountBefore4Day,
    count(eval(Date=strftime(relative_time(now(), "-5d@d"), "%Y-%m-%d"))) as HotCountBefore5Day,
    count(eval(Date=strftime(relative_time(now(), "-6d@d"), "%Y-%m-%d"))) as HotCountBefore6Day,
    count(eval(Date=strftime(relative_time(now(), "-7d@d"), "%Y-%m-%d"))) as HotCountBefore7Day,
    .
    .
    count(eval(Date=strftime(relative_time(now(), "-30d@d"), "%Y-%m-%d"))) as HotCountBefore30Day by TestMQ
| eval Today = strftime(now(), "%Y-%m-%d")
| eval Before1Day = strftime(relative_time(now(), "-1d@d"), "%Y-%m-%d")
| eval Before2Day = strftime(relative_time(now(), "-2d@d"), "%Y-%m-%d")
| eval Before3Day = strftime(relative_time(now(), "-3d@d"), "%Y-%m-%d")
| eval Before4Day = strftime(relative_time(now(), "-4d@d"), "%Y-%m-%d")
| eval Before5Day = strftime(relative_time(now(), "-5d@d"), "%Y-%m-%d")
| eval Before6Day = strftime(relative_time(now(), "-6d@d"), "%Y-%m-%d")
| eval Before7Day = strftime(relative_time(now(), "-7d@d"), "%Y-%m-%d")
.
.
| eval Before23Day = strftime(relative_time(now(), "-23d@d"), "%Y-%m-%d")
| eval TotalMinutes = ($timeto$ - $timefrom$) * 60
| eval TPS_Today=round(HotCountToday/(TotalMinutes*60),3)
| eval TPS_Before1Day=round(HotCountBefore1Day/(TotalMinutes*60),3)
| eval TPS_Before2Day=round(HotCountBefore2Day/(TotalMinutes*60),3)
| eval TPS_Before3Day=round(HotCountBefore3Day/(TotalMinutes*60),3)
| eval TPS_Before4Day=round(HotCountBefore4Day/(TotalMinutes*60),3)
| eval TPS_Before5Day=round(HotCountBefore5Day/(TotalMinutes*60),3)
| eval TPS_Before6Day=round(HotCountBefore6Day/(TotalMinutes*60),3)
| eval TPS_Before7Day=round(HotCountBefore7Day/(TotalMinutes*60),3)
.
.
| eval TPS_Before30Day=round(HotCountBefore30Day/(TotalMinutes*60),3)
| eval Variance_TPS_Today = case(TPS_Before7Day > TPS_Today, round(((TPS_Before7Day - TPS_Today) / TPS_Before7Day) * 100,3), TPS_Before7Day < TPS_Today, round(((TPS_Today - TPS_Before7Day) / TPS_Today) * 100,3), TPS_Before7Day = TPS_Today, round(((TPS_Before7Day - TPS_Today)) * 100,3))
| eval Variance_TPS_Before1Day = case(TPS_Before8Day > TPS_Before1Day, round(((TPS_Before8Day - TPS_Before1Day) / TPS_Before8Day) * 100,3), TPS_Before8Day < TPS_Before1Day, round(((TPS_Before1Day - TPS_Before8Day) / TPS_Before1Day) * 100,3), TPS_Before8Day = TPS_Before1Day, round(((TPS_Before8Day - TPS_Before1Day)) * 100,3))
| eval Variance_TPS_Before2Day = case(TPS_Before9Day > TPS_Before2Day, round(((TPS_Before9Day - TPS_Before2Day) / TPS_Before9Day) * 100,3), TPS_Before9Day < TPS_Before2Day, round(((TPS_Before2Day - TPS_Before9Day) / TPS_Before2Day) * 100,3), TPS_Before9Day = TPS_Before2Day, round(((TPS_Before9Day - TPS_Before2Day)) * 100,3))
.
.
.
| eval Variance_TPS_Before23Day = case(TPS_Before30Day > TPS_Before23Day, round(((TPS_Before30Day - TPS_Before23Day) / TPS_Before30Day) * 100,3), TPS_Before30Day < TPS_Before23Day, round(((TPS_Before23Day - TPS_Before30Day) / TPS_Before23Day) * 100,3), TPS_Before30Day = TPS_Before23Day, round(((TPS_Before30Day - TPS_Before23Day)) * 100,3))
| eval {Today}=Variance_TPS_Today | fields - Today Variance_TPS_Today
| eval {Before1Day}=Variance_TPS_Before1Day | fields - Before1Day Variance_TPS_Before1Day
| eval {Before2Day}=Variance_TPS_Before2Day | fields - Before2Day Variance_TPS_Before2Day
| eval {Before3Day}=Variance_TPS_Before3Day | fields - Before3Day Variance_TPS_Before3Day
| eval {Before4Day}=Variance_TPS_Before4Day | fields - Before4Day Variance_TPS_Before4Day
| eval {Before5Day}=Variance_TPS_Before5Day | fields - Before5Day Variance_TPS_Before5Day
| eval {Before6Day}=Variance_TPS_Before6Day | fields - Before6Day Variance_TPS_Before6Day
| eval {Before7Day}=Variance_TPS_Before7Day | fields - Before7Day Variance_TPS_Before7Day
.
.
.
| eval {Before23Day}=Variance_TPS_Before23Day | fields - Before23Day Variance_TPS_Before23Day
| table TestMQ 2*

Query output as below:

TestMQ | 2023-06-23 | 2023-06-22 | 2023-06-21 | 2023-06-20 | 2023-06-19 | 2023-06-18 | 2023-06-17 | 2023-06-16 | and so on, till 30 days
MQ.NAME | 5.003 | 17.004 | 25.775 | 19.882 | 32.114 | 56.881 | 10.991 | 85.114 | ....

I am new to Splunk and still learning. Looking forward to hearing from you. Kindly suggest how this can be achieved. @ITWhisperer @bowesmana @xpac @MuS @yuanliu - looking forward to hearing from you, please help assist.
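A minimal sketch of how the same per-day MaxTPS could be computed per queue without the long chain of hard-coded evals, assuming "sent" still marks the messages to count, TestMQ is the split field, and $TotalMinutes$ resolves to the window length in minutes:

index=<search_strings> earliest=-30d@d date_hour>=$timefrom$ AND date_hour<$timeto$
| bin _time span=$TotalMinutes$m
| eval Day=strftime(_time, "%Y-%m-%d")
| stats count(eval(searchmatch("sent"))) as HotCount by TestMQ Day _time
| eval TPS=round(HotCount/($TotalMinutes$*60),3)
| stats max(TPS) as MaxTPS by TestMQ Day
| xyseries TestMQ Day MaxTPS

From that table, the week-over-week variance can be derived by comparing each Day column to the one seven days earlier (for example with foreach, or by keeping the data unpivoted and using streamstats), rather than writing one case() per day.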
Hello All, I need help building an SPL search that returns the Job Inspector results for each query executed by end users. Example fields:

command.search.rawdata
command.search.kv
command.search.index

Any information or guidance will be very helpful. Thank you, Taruchit
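A minimal sketch of one starting point: the search jobs REST endpoint exposes per-job summary statistics to SPL. Whether the detailed performance.command.search.* breakdown is flattened into fields varies, so treat the field names below as assumptions to verify on your version:

| rest /services/search/jobs count=0
| table author title dispatchState runDuration scanCount eventCount resultCount
| rename author as user, title as search_string

Note that this only covers jobs still on disk (within their TTL); for history beyond that, index=_audit action=search records who ran what, but not the Job Inspector command timings.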
Hi, I'm trying to use an index and a lookup file together. However, the values in those fields are not an exact match, even though the email addresses belong to the same person. How can I get the non-exact match to work?

e.g. from index=user: email_address, team
john.doe@xyz.com, blue

from file.csv: email_address, department
john.doe@xyz.com.au, HR

example search: index=user | lookup "file.csv" | table email_address department
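A minimal sketch of one approach: build a normalized join key on both sides (here the local part before the @, on the assumption that usernames match even when the domains differ) and join on that instead of relying on lookup's exact matching:

index=user
| eval join_key=lower(mvindex(split(email_address, "@"), 0))
| join type=left join_key
    [| inputlookup file.csv
     | eval join_key=lower(mvindex(split(email_address, "@"), 0))
     | fields join_key department]
| table email_address team department

Alternatively, if the lookup file is small, it could be re-exported with the same normalized key column and used with an ordinary lookup command.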
I have these events in the logs. So far I have only set up a username and password under Settings > All Settings > Credentials > Manage API Poller Credentials, but I am getting a 403 error - so I am missing something. Is it my account setup?

2023-06-23 07:08:53,939 +0000 log_level=ERROR, pid=340421, tid=Thread-4, file=engine.py, func_name=_send_request, code_line_no=318 | [stanza_name="test_alerts"] The response status=403 for request which url=https://x.x.x.x:17778/SolarWinds/InformationService/v3/Json/Query?query=SELECT+EventID,EventTime,NetworkNode,NetObjectID,NetObjectValue,EngineID,EventType,Message,Acknowledged,NetObjectType,TimeStamp,DisplayName,Description,InstanceType,Uri,InstanceSiteId+FROM+Orion.Events+WHERE+EventTime>'2023-06-23 00:00:00.00' and method=GET.
Hello. myphantom.com is closed. How can I now download the ISO image for a VM, or how can I get the community version for Google Cloud? Can someone help, please?
Hi, Can we see the queries run by another Splunk user for any app? Does it require any extra privileges / roles? Please let me know. Regards, PNV
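A minimal sketch of one way this is commonly done, assuming you have access to the _audit index (which typically requires an admin-like role or explicit access to that index); <username> is a placeholder:

index=_audit action=search info=granted user=<username>
| table _time user search
| sort - _time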
Upgrading the glibc package to version 2.17-326.0.5.el7_9 on Oracle Linux 7 can cause crashes. Please see https://github.com/oracle/oracle-linux/issues/90 for the solution. Posting just for visibility.
I need an API call to run a Splunk report that has already been saved and add the most recent values to the report; I do not wish to wait for its cron schedule.

I attempted to use "dispatch.now" with the saved/searches/{name}/dispatch API. It started a job and executed the search - I could see the results in finished jobs - but my report was not updated with the most recent information.

I also need an API to check the status of the executed query, to see whether it has finished or is still running. The API response instructs me to look for the parameter isDone=true; however, I cannot depend on the results, because the jobs are still running when I manually check their status.
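As a minimal sketch for the status-check part, the same job information is queryable from SPL via the REST endpoint; <sid> is a placeholder for the search ID returned by the dispatch call, and the availability of these exact fields should be confirmed on your version:

| rest /services/search/jobs count=0
| search sid="<sid>"
| table sid dispatchState isDone doneProgress runDuration

dispatchState moves through states such as QUEUED, PARSING, RUNNING, FINALIZING and DONE, so polling until dispatchState=DONE (or isDone=1) is the usual pattern.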
We are looking at utilizing the "InfoSec App for Splunk"; however, the last version is from June 2021 (two years ago). Has this app been superseded by another, or is there a different long-term plan for the app? We just want to know whether we should continue down this path or take another. Thanks!
The below query gives the results for 30 days of MaxTPS data (between the time range of 2:00 to 4:00):

index=<search_strings> earliest=-30d@d date_hour>=2 AND date_hour<4
| timechart span=120m count(eval(searchmatch("sent"))) as HotCountToday
| eval TPS=round(HotCountToday/(120*60),2)
| eval Day=strftime(_time, "%Y-%m-%d")
| stats max(TPS) as MaxTPS by Day

Now I want to calculate the "MaxTPS variance" for the complete 30 days: calculate the percentage MaxTPS variance between today's value and last week's value (and so on) and show the MaxTPS variance percentage. (Example: Monday to last week's Monday, Sunday to last week's Sunday, and so on.)

I am new to Splunk and still learning. Looking forward to hearing from you. Kindly suggest how this can be achieved. @ITWhisperer @bowesmana @xpac
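A minimal sketch of one way to compare each day to the same weekday one week earlier, appended after the existing stats; it assumes every day in the range produces a row (so that the row seven positions back is exactly seven days earlier):

| sort 0 Day
| streamstats current=f window=7 earliest(MaxTPS) as MaxTPS_LastWeek
| eval MaxTPS_Variance_Pct=if(isnotnull(MaxTPS_LastWeek), round(abs(MaxTPS - MaxTPS_LastWeek) / max(MaxTPS, MaxTPS_LastWeek) * 100, 3), null())
| table Day MaxTPS MaxTPS_LastWeek MaxTPS_Variance_Pct

The abs/max combination mirrors the case() logic from the averaged-TPS version of this question: the difference is always expressed as a percentage of the larger of the two values.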
I am getting the log file imported into Splunk, but each line is an event with no field names. Can I break the line up into columns? If not, how do I parse the line to extract a number?

The index is: index=test_7d sourcetype=kafka:producer:bigfix

Events are:

2023-06-22 09:15:44,270 root - INFO - 114510 events have been uploaded to topic DC2_Endpoint_Configuration_IBM_BigFix_Patch_Join on Kafka
2023-06-22 09:15:37,204 root - INFO - Executing getDatafromDB
2023-06-22 09:15:35,704 root - INFO - 35205 events have been uploaded to topic DC2_Endpoint_Configuration_IBM_BigFix_Patch_Join on Kafka
2023-06-22 09:15:33,286 root - INFO - Executing getDatafromDB
2023-06-22 09:15:32,703 root - INFO - 167996 events have been uploaded to topic DC2_Endpoint_Configuration_IBM_BigFix_Patch_Join on Kafka
2023-06-22 09:15:22,479 root - INFO - Executing getDatafromDB
2023-06-22 09:15:19,031 root - INFO - 181 events have been uploaded to topic DC2_Endpoint_Configuration_IBM_BigFix_Patch_Join on Kafka

Each line/event starts with the date; the word wrap is making it look incorrect. I need to parse the number in each line after '- INFO -' and add a zero if there is no number. I can do this with an eval, but how do I parse when there is no field name to feed to the 'regex' command? Thank you for looking at this problem!
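A minimal sketch of extracting that number at search time; rex works against _raw by default, so no field name is needed, and fillnull supplies the zero for the lines with no count (the field name event_count is just an illustrative choice):

index=test_7d sourcetype=kafka:producer:bigfix
| rex "-\s+INFO\s+-\s+(?<event_count>\d+)\s+events have been uploaded"
| fillnull value=0 event_count
| table _time event_count _raw

The same pattern could be made permanent with a search-time field extraction (an EXTRACT- stanza entry in props.conf) so the field exists without the inline rex.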