All Topics

Find Answers
Ask questions. Get answers. Find technical product solutions from passionate members of the Splunk community.

Hello, is there a right way to show a timechart with a 1-day span of a percentage value calculated by stats or eval? We currently collect public IP total and used counts as numbers, split by data center, and I want to use the data center as a token when showing the result. If I set the data center token to *, I want to sum the used and total values across all data centers, turn them into a percentage like round(used / total * 100, 2), and timechart that. I tried the following, but I get no usage result:

my base search data_center IN ($TOKEN$)
| bucket span=1d _time
| stats sum('ip.used') as used, sum('ip.total') as total by _time
| eval usage=round(used/total * 100, 2)
| timechart span=1d limit=0 values(usage)

Could anyone show me the right way? Thank you.
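A hedged sketch of one possible fix (assuming the raw fields really are named ip.used and ip.total): single quotes dereference field names only in eval, not in stats aggregations, so the sums come back empty. Double-quoting the dotted names and giving timechart an explicit aggregation may work:

```spl
my base search data_center IN ($TOKEN$)
| bucket span=1d _time
| stats sum("ip.used") as used sum("ip.total") as total by _time
| eval usage=round(used / total * 100, 2)
| timechart span=1d limit=0 max(usage) as usage
```

Since stats has already reduced the data to one row per day, max(usage) simply carries each day's single value into the chart.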
Hello, I want the total of multiple searches in a timechart per week. My search, in simplified form, over the last 90 days:

| inputlookup abcd.csv | search host=*CC* | dedup host | stats count(host) as "List1"
| appendcols [| inputlookup efgh.csv | search host=*AA* | dedup host | stats count(host) as "List2"]
| appendcols [| inputlookup xyz1.csv | search host=*BB* | dedup host | stats count(host) as "List3"]
| eval Total=List1+List2+List3
| timechart span=w@1w sum(Total) as "Hosts"

If I run it without the last timechart line, it gives me the total for 90 days or 1 week, but I need the same results calculated weekly using timechart, displaying the total per week.
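Lookup rows carry no _time, so timechart has nothing to bucket by week; the rows first need a timestamp. A hedged sketch, assuming each CSV has a timestamp column (called scan_time here purely for illustration; substitute your real column and its format):

```spl
| inputlookup abcd.csv | search host=*CC* | eval list="List1"
| append [| inputlookup efgh.csv | search host=*AA* | eval list="List2"]
| append [| inputlookup xyz1.csv | search host=*BB* | eval list="List3"]
| eval _time=strptime(scan_time, "%Y-%m-%d")
| dedup host list
| timechart span=1w dc(host) as "Hosts"
```

If the lookups are static host inventories with no date column at all, there is nothing for timechart to plot over time.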
We want to overwrite the lookup file under the following condition: if the data does not exist, we don't want to overwrite the lookup file; if it does exist, we can proceed to overwrite it.

index=siem_test sourcetype="db:cmdb" | timechart count by source

If the CMDB part is not lost, don't overwrite the output lookup. Also, how can we check whether the sourcetype is not reporting? Please suggest a condition we can use in our search query to populate the result.
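One hedged option: outputlookup has an override_if_empty option, so the lookup is left untouched whenever the search comes back empty (the lookup name cmdb_lookup.csv is illustrative):

```spl
index=siem_test sourcetype="db:cmdb"
| stats count by source
| outputlookup override_if_empty=false cmdb_lookup.csv
```

With override_if_empty=false, an empty result set (e.g. the CMDB sourcetype stopped reporting) leaves the existing file in place instead of wiping it.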
I am trying to create a Splunk alert which sends an email if a key value is missing.

host="myhost" sourcetype="access_log" "Key_Word in the access logs"

Usually I get the log entries every 30 minutes. I want to be alerted via email if "Key_Word in the access logs" is missing from the access logs. Can someone guide me on this?
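A hedged sketch: schedule a search like the one below to run every 30 minutes over the last 30 minutes (the host/sourcetype values are taken from the question):

```spl
host="myhost" sourcetype="access_log" "Key_Word in the access logs"
| stats count
```

Because stats count always returns exactly one row, set the alert's trigger to a custom condition such as search count=0, so the email fires only when the keyword was absent in the window.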
prd-sso-data-science-711-3006-compute-role
dev-1000-535-aibench-mlops-service-compute-role

The above are the field values. I need to extract the codes 711-3006 and 1000-535. I used this regular expression:

| rex field="Role" ".*(?<Projectcode>\d{3,4}-\d+)"

but it is not fetching the code properly: it gives 000-535 for the second value. Kindly help.
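The greedy .* swallows leading digits, which is why 1000-535 loses its first digit. Anchoring the code between the surrounding hyphens avoids that; a hedged sketch:

```spl
| rex field=Role "-(?<Projectcode>\d{3,4}-\d+)-"
```

Against the two sample values this captures 711-3006 and 1000-535 respectively, because the pattern can no longer start matching in the middle of a digit run.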
My base search provides me this result:

Column_1   Column_2
---------------------
           Val1
A          Val2
           Val3
---------------------
           Val4
B          Val5
           Val6
           Val7
---------------------

I want to pivot the values of Column_2 over Column_1. The output should be:

A      |  B
Val1   |  Val4
Val2   |  Val5
Val3   |  Val6
       |  Val7

I have tried chart values(column_2) by column_1, with no luck.
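The blank Column_1 cells are the usual merged-row display, so chart ... by column_1 only sees the rows where the value is literally present. A hedged sketch, assuming a blank means "same as the row above":

```spl
... | filldown Column_1
| streamstats count as row by Column_1
| chart values(Column_2) over row by Column_1
```

filldown copies A/B down into the blank cells, and the synthetic row counter keeps each value on its own output row instead of collapsing everything into one multivalue cell.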
Hi @gcusello,

Can you please guide me on the below? The requirement is to integrate Bitbucket, Bamboo, and UCD with Splunk. I mean I have to pull logs from Bitbucket, Bamboo, and UCD into Splunk and create a dashboard for them in Splunk. But the add-ons listed on Splunkbase (https://splunkbase.splunk.com/app/4182/, https://splunkbase.splunk.com/app/3440/, https://splunkbase.splunk.com/app/2789/) are not supported on Splunk version 8, or we are unable to pull the logs using them. Can you please let me know the steps to proceed with the integration without using Splunk add-ons?

Thanks
Hello,

I'm currently working on a dashboard for our servers' VM count logs. The logs are collected daily, and I'm trying to show the count trend using trellis by data center. The search is like below:

host=[HOST] index=[INDEX] sourcetype=[SRC_TYPE] source=[SRC]
| timechart limit=0 span=1d sum(vm.count) as VM by center

If I make a single value trellis viz with the above search, the difference in VM count is only shown on a daily basis, like the attached picture. I want the trendInterval option to change dynamically when I pick a new range in the time picker; for example, if I change the range to Last 90 days, it should show the difference between today and 90 days ago. How could I do that? Thank you.
Hey Splunksters, how can I get to the next hour plus 15 minutes, when the minute is more than 15 past the hour, for an event's timestamp? So far I have it working for when the minute is at 00 on the hour and will keep that line of code. I have tried different methods but am still at square one.

(index="123" level=pdf) OR (index="456")
| eval latestSub=case(level="pdf", eventTimeStamp)
| eval Ingestion_Time=strftime(strptime(latestSub, "%Y-%m-%d %H:%M:%S.%3N") + 4500, "%Y-%m-%d %H:%M:%S.%3N")
| stats dc(index) as idx values(index) as indexes values(level) as level latest(latestSub) as latestSub latest(Ingestion_Time) as Ingestion_Time by letterSubmission
| where idx=1 AND indexes!="456"
| fields - idx

The code above only has the logic for when the minute is 00 on the hour, but I need to handle when latestSub is more than 15 minutes past the hour. Any guidance on the approach is greatly appreciated.

Example: 2021-02-19 13:16:43.349028
Desired result when the minute is past 15: 2021-02-19 14:15:43.349028
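One hedged approach: instead of always adding a fixed 4500 seconds, snap to the top of the hour and rebuild the offset while keeping the seconds. A sketch (using %6N since the sample carries six fractional digits):

```spl
| eval t=strptime(latestSub, "%Y-%m-%d %H:%M:%S.%6N")
| eval minute=tonumber(strftime(t, "%M"))
| eval next=if(minute > 15,
      relative_time(t, "@h") + 3600 + 900 + (t - relative_time(t, "@m")),
      t + 4500)
| eval Ingestion_Time=strftime(next, "%Y-%m-%d %H:%M:%S.%6N")
```

For 2021-02-19 13:16:43.349028 this floors to 13:00, adds 1 h 15 m, then adds back the 43.349028 seconds, giving 2021-02-19 14:15:43.349028.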
How do I monitor and troubleshoot whether all data sources are communicating with their assigned indexers, and then create a report or alerts for it?
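A hedged starting point: the indexers' own _internal metrics record every inbound forwarder connection, so something like the following lists when each forwarder was last seen (field names per the standard metrics.log tcpin_connections group; verify against your own events):

```spl
index=_internal source=*metrics.log* group=tcpin_connections
| stats latest(_time) as last_seen by hostname
| eval minutes_silent=round((now() - last_seen) / 60, 0)
| where minutes_silent > 30
```

Saved as a scheduled alert, this flags any forwarder that has been quiet for more than 30 minutes; the threshold is an arbitrary example.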
Hi, I am collecting data from Salesforce; however, some of the alert logs we wish to collect can only be retrieved from API version 49 and above. The Splunk add-on currently uses version 48, whilst the latest Salesforce API is 51. Is there any timeline for when the add-on will be updated to support the newer API?
Hi, we have several assets that share the same ending (e.g. splunkcloud.com) but different beginnings. Can we wildcard assets in the ES asset lookup table, to something like *splunkcloud.com?
Hi, I want to ignore some comment lines but store the value of the last comment in a field. For example, I have a log where the first three lines are comments for Version, Date, and Software:

#Ver: 1.0
#Date: 2020-04-18 11:10:15
#Software: ABC for Web 11.8.0-414

How do I write the regex for this so I can keep the last field's value? My regex is REGEX = ^\# but it drops all lines with a leading hash. How do I store the Software value in a field while the previous comment values can be dropped?
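A hedged sketch: narrow the null-queue regex so only the Ver and Date comments are dropped, and extract the Software value at search time (the stanza names here are illustrative):

```
# transforms.conf -- drop only the Ver/Date comment lines
[drop_ver_date_comments]
REGEX = ^#(Ver|Date):
DEST_KEY = queue
FORMAT = nullQueue

# props.conf -- apply the drop, then extract Software at search time
[your:sourcetype]
TRANSFORMS-dropcomments = drop_ver_date_comments
EXTRACT-software = ^#Software:\s+(?<software>.+)
```

The #Software line is still indexed, so the EXTRACT picks its value up as the field software.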
Hi, I'm having an issue deploying the Stream forwarder to UFs via the deployment server. I have installed the Stream TA as a deployment app, but it doesn't work and I can't see the forwarders in Stream. In inputs.conf I set splunk_stream_app_location to the address of my Stream app, and I do get stream logs from the Stream app itself, but it doesn't work on the UFs. Can anybody help me with this problem? Thanks.
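For comparison, a hedged sketch of what the Splunk_TA_stream inputs.conf on the UF typically looks like (the search-head URL is a placeholder; verify the exact path against your own Stream app):

```
[streamfwd://streamfwd]
splunk_stream_app_location = https://<search_head>:8000/en-us/custom/splunk_app_stream/
disabled = 0
```

Common pitfalls are the UF being unable to reach that URL on port 8000, and the stanza remaining disabled = 1 in the deployed copy of the app.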
Hi, I'm trying to create an incident in the Alert Manager app per result row of the generating search. Say I have a search "Failed transactions by host" whose result table looks like this:

_time                 host    failed_transactions
2021-03-07 12:55:01   host_a  100
2021-03-07 12:55:01   host_b  200

It is easy to create an incident for "failed transactions" in general, but I would like to create incidents per host that can be tracked individually. I tried using $result.host$ as the title, but that did not work. Does anyone know whether this is possible?
Hello @richgalloway, I am asking for your help again to get counts for the messages below. I tried the same instructions but was unable to get the counts; your help would be highly appreciated. Consider messages which end with "To Report." and count them as follows:

- message contains "Parker could not be processed" - Failure count
- message contains "Parker successfully issued" - Success count
- message contains "System exception.Parker Exception Occurred" - System exception count
- any other message - Partial Success count

Also get the total count.

PK11036791 : Parker successfully issued the 06/05/2021 renewal.,.To Report.
PK11036918 : Parker successfully issued the 06/05/2021 renewal.,.To Report.
PK11037082 : Parker successfully issued the 06/05/2021 renewal.,.To Report.
PK01041601 : New activity on DRA for Michael Demiranda.,Please review new MVR information.,New PPA changes present.,Multiple Property policies present, please work HO.,.To Report.
PK11032274 : Please review new MVR information.,.To Report.
PK11036998 : Parker successfully issued the 06/05/2021 renewal.,.To Report.
PK11041586 : New HO changes present.,Please review new MVR information.,New PPA changes present.,.To Report.
PK11004163 : New HO changes present.,New PPA changes present.,.To Report.
PK11014724 : New PPA changes present.,.To Report.
PK11041665 : New HO changes present.,Please review new MVR information.,New PPA changes present.,.To Report.
Parker could not be processed, please work PK Renewal. To Report.
System exception.Parker Exception Occurred : Unable to extract Pending Renewal policy period for PK Policy. at Source: Invoke Workflow File: Get Data: Throw
System exception.Parker Exception Occurred : Index and length must refer to a location within the string. Parameter name: length at Source: Invoke Workflow File: Make Decision: Throw
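A hedged sketch of one way to bucket these (assuming the raw text is in a field called message):

```spl
... | eval outcome=case(
      match(message, "Parker could not be processed"), "Failure",
      match(message, "Parker successfully issued"), "Success",
      match(message, "System exception\.Parker Exception Occurred"), "System exception",
      true(), "Partial Success")
| stats count by outcome
| addcoltotals labelfield=outcome label="Total"
```

case evaluates top to bottom, so anything not matching the first three patterns falls through to Partial Success; addcoltotals appends the overall total row.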
How do I create alerts for data ingestion exceeding my licensed amount, and for disk usage exceeding capacity on the indexers? In addition, how do I create an alert for users exceeding their allowed disk quotas and their search quotas? I really appreciate your help on this. Thanks in advance.
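For the license piece, a hedged sketch against the license master's own usage log (type=Usage rows in license_usage.log; the 80 GB threshold is an example, substitute your licensed volume):

```spl
index=_internal source=*license_usage.log* type=Usage
| bucket _time span=1d
| stats sum(b) as bytes by _time
| eval GB=round(bytes / 1024 / 1024 / 1024, 2)
| where GB > 80
```

Scheduled daily as an alert, this fires on any day whose indexed volume crossed the threshold; the Monitoring Console also ships ready-made health alerts for indexer disk usage.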
How do I check whether all my indexers are healthy, and whether the universal and heavy forwarders are healthy and reporting in?
I want a dropdown in my dashboard's search results with the values "New Alert", "In-Progress", and "Resolved" for the Status field. Please help here.
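If this is a Simple XML dashboard, a hedged sketch of a static dropdown input feeding a $status$ token (token and values are illustrative):

```xml
<input type="dropdown" token="status">
  <label>Status</label>
  <choice value="New Alert">New Alert</choice>
  <choice value="In-Progress">In-Progress</choice>
  <choice value="Resolved">Resolved</choice>
  <default>New Alert</default>
</input>
```

The panel's search can then filter with Status="$status$". If the goal is instead to edit the Status value per table row, that needs a custom table cell renderer rather than a plain input.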
The old Hunk documentation (https://docs.splunk.com/Documentation/Hunk/6.4.11/Hunk/StreamingLibraries) mentions that you can create custom external result providers (ERPs) using virtual indexes. However, Hunk has been replaced by Splunk Analytics for Hadoop, and I can find no mention of custom ERPs in its documentation. If ERPs are deprecated, is there an alternative solution? I know you can create custom commands which act as generators, but that doesn't meet my needs. A slide deck from Mark Groves (https://www.slideshare.net/mongodb/splunk-hunk-mongodbdaysseattlesept2014) indicates that predicates and projections are passed to the ERP so it can work more efficiently; this is the functionality I need.