All Topics



Hi, new member so apologies if I miss any forum etiquette! I'm trying to query ServiceNow incident data to show the number of tickets opened over the last 52 weeks and compare it to the previous 52-week period. I had a bit of help from a third-party company to build some queries, but now that they have gone I can see a few issues with the numbers. The query is as follows:

index=prod_service_now sourcetype=snow:incident earliest=-52w@w1 latest=@w1 number=INC*
| dedup sys_id
| search dv_assignment_group=ITSOCS* NOT dv_assignment_group="ITSOCS Logistics"
| eval _time = strptime(opened_at,"%Y-%m-%d%H:%M:%S")
| eval now = now()
| eval now = relative_time(now,"@w1")
| eval earliest = now()
| eval earliest = relative_time(earliest,"@w1")
| eval earliest = relative_time(earliest, "-52w@w1")
| where _time >= earliest AND _time <= now
| eval _time = relative_time(_time,"@w1")
| timechart span="1w@w1" dc(number) as current_incident_count
| rename VALUE as NULL * as "* - CURRENT YEAR"
| rename "_time - CURRENT YEAR" as _time
| fields - "_span - CURRENT YEAR", "_spandays - CURRENT YEAR"
| appendcols
    [ search index=prod_service_now sourcetype=snow:incident earliest=-104w@w1 latest=-52w@w1 number=INC*
    | dedup sys_id
    | search dv_assignment_group=ITSOCS* NOT dv_assignment_group="ITSOCS Logistics"
    | eval _time = strptime(opened_at,"%Y-%m-%d%H:%M:%S")
    | eval now = now()
    | eval now = relative_time(now,"@w1")
    | eval now = relative_time(now,"-52w@w1")
    | eval earliest = now()
    | eval earliest = relative_time(earliest,"@w1")
    | eval earliest = relative_time(earliest, "-104w@w1")
    | where _time >= earliest AND _time <= now
    | eval _time = relative_time(_time,"@w1")
    | timechart span="1w@w1" dc(number) as historical_incident_count
    | rename VALUE as NULL * as "* - LAST YEAR"
    | rename "_time - LAST YEAR" as _time
    | fields - "_span - LAST YEAR", "_spandays - LAST YEAR" ]

One obvious issue is that the query limits the base search to blocks of 52 weeks based on _time, which in this case is the last-updated field. So tickets could be missed if they were opened in that period but updated outside of it. If I remove the earliest and latest parameters, the search is painfully slow, and the lines on the graph are no longer overlaid but instead run sequentially.

Can anyone suggest a better way to do this? What I need is a line graph spanning 52 weeks, with the two series on top of one another. Hopefully I have given enough info; please shout if anything isn't clear! Thanks for reading.
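For the overlay itself, timewrap was built for exactly this: run one 104-week search keyed on opened_at and let timewrap cut the series into 52-week periods stacked on a shared axis, which removes both the appendcols and the manual relative_time bookkeeping. A minimal sketch using the same index and field names as the query above; the widened earliest of -106w is an assumption about how much "updated later than opened" slack is needed, and note the strptime format here includes a space between date and time, which a typical opened_at value contains:

```spl
index=prod_service_now sourcetype=snow:incident earliest=-106w@w1 latest=@w1 number=INC*
    dv_assignment_group=ITSOCS* NOT dv_assignment_group="ITSOCS Logistics"
| dedup sys_id
| eval _time = strptime(opened_at, "%Y-%m-%d %H:%M:%S")
| where _time >= relative_time(now(), "-104w@w1") AND _time < relative_time(now(), "@w1")
| timechart span=1w@w1 dc(number) as incident_count
| timewrap 52w
```

Because timewrap produces one series per 52-week period, the current and previous years land on top of one another automatically; depending on your Splunk version you may need to spell the argument as 52week.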
Hello, I am looking for assistance developing a Splunk query that will display all users within my organization who have sent 5+ emails to gmail, yahoo, or hotmail domains within the past hour. As an example, if John in Accounting has submitted his resignation and is now performing a data dump by sending small amounts of data home (to evade detection), I would like to be alerted immediately.
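A sketch of one way to build such an alert. All of the names here (index, sourcetype, sender, recipient) are assumptions; substitute whatever your mail logs actually extract:

```spl
index=mail sourcetype=mail:traffic earliest=-1h@m latest=@m
| eval recipient_domain = lower(mvindex(split(recipient, "@"), -1))
| where recipient_domain IN ("gmail.com", "yahoo.com", "hotmail.com")
| stats count as freemail_count dc(recipient) as distinct_recipients values(recipient_domain) as domains by sender
| where freemail_count >= 5
```

Saved as a scheduled alert running hourly with "trigger for each result", this emails once per offending sender. Note it only counts volume; it cannot judge intent or content, so it complements rather than replaces a DLP control.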
Hello, we started getting the error below. Is this related to expired credentials? I can't really tell what's going on.

2021-01-08 10:07:19,066 level=ERROR pid=25221 tid=MainThread logger=splunk_ta_o365.modinputs.management_activity pos=utils.py:wrapper:67 | start_time=1610118437 datainput="XXX_AzureAD" | message="Data input was interrupted by an unhandled exception."
Traceback (most recent call last):
  File "/opt/splunk/etc/apps/splunk_ta_o365/bin/splunksdc/utils.py", line 65, in wrapper
    return func(*args, **kwargs)
  File "/opt/splunk/etc/apps/splunk_ta_o365/bin/splunk_ta_o365/modinputs/management_activity.py", line 102, in run
    executor.run(adapter)
  File "/opt/splunk/etc/apps/splunk_ta_o365/bin/splunksdc/batch.py", line 47, in run
    for jobs in delegate.discover():
  File "/opt/splunk/etc/apps/splunk_ta_o365/bin/splunk_ta_o365/modinputs/management_activity.py", line 125, in discover
    self._token.auth(session)
  File "/opt/splunk/etc/apps/splunk_ta_o365/bin/splunk_ta_o365/common/token.py", line 56, in auth
    self._token = self._policy(self._resource, session)
  File "/opt/splunk/etc/apps/splunk_ta_o365/bin/splunk_ta_o365/common/token.py", line 37, in __call__
    return self._portal.get_token_by_psk(self._client_id, self._client_secret, resource, session)
  File "/opt/splunk/etc/apps/splunk_ta_o365/bin/splunk_ta_o365/common/portal.py", line 98, in get_token_by_psk
    raise O365PortalError(response)
  File "/opt/splunk/etc/apps/splunk_ta_o365/bin/splunk_ta_o365/common/portal.py", line 31, in __init__
    self._code = data['error']['code']
TypeError: string indices must be integers
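The TypeError at the bottom of the traceback is the giveaway: portal.py runs data['error']['code'], which only works when the Azure AD token endpoint returns a parsed JSON error object. If the response body is a plain string (which can happen when the client secret has expired or been rotated, so the endpoint answers with a non-JSON error body), indexing a string with a string key raises exactly this exception. A minimal reproduction of the failure mode, with extract_error_code standing in for the TA's error-handling code:

```python
# The TA expects a parsed JSON error object from the token endpoint.
def extract_error_code(data):
    return data['error']['code']

# Expected shape: a dict, as returned for a well-formed OAuth error.
code = extract_error_code({'error': {'code': 'invalid_client'}})

# Actual shape in this failure: a bare string body instead of JSON.
try:
    extract_error_code('non-JSON error body from the token endpoint')
    message = None
except TypeError as exc:
    message = str(exc)  # "string indices must be integers..."
```

So yes, expired or invalid credentials are a very plausible cause: checking the client secret's expiry in the Azure portal and re-entering the tenant credentials in the add-on's configuration is a sensible first step.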
Hi, I have connected DB Connect to our SQL Servers and successfully configured the SQL Server connection in DB Connect. Now the SQL administrator is saying that he is receiving too many alerts on SQL Server, as shown below. What are these alerts? Can anyone help with this?
We are currently evaluating Splunk's cloud offering, and the topic of concurrent searches has come up. This is a bit of a concern for our team, as one of the things we'd like to leverage Splunk for is alerting for various systems throughout our environment. We're expecting around 200+ alerts running at various intervals. I'm assuming that we cannot be the only folks wanting to utilize Splunk this way within the cloud. I wanted to ask the community if the concurrent search limits within Splunk Cloud are ever really an issue. We've gone round and round with our sales engineer on whether this is a problem, and he's mentioned that the solution will scale with the amount we index, but we're not 100% convinced this solves it.
Hello, where can we find instructions to set up Docker infrastructure monitoring like in the video below? https://www.splunk.com/en_us/resources/videos/monitoring-docker.html
What happens to privately owned knowledge objects when the Splunk authentication method is switched from native Splunk Enterprise authentication to SAML? If the logon names are the same, will users be able to access the private knowledge objects they created before the switch?
Hello good people of the Splunk community. I'm fairly new to Splunk, so sorry if this is a newb question. I have a search that retrieves only events with certain field values in the Procedure_Name or Process_Name fields, groups them by our scheduling cycle, and displays which procedures/processes failed (indicated by the activity code not being 2000):

(index=app host=myhost sourcetype=mysourcetype) OR (index=myindex source=mysource) earliest=-1w@w latest=now
| where Process_Name IN ("Process1","Process2","Process3"..."Process26") OR Procedure_Name IN ("Procedure1","Procedure2","Procedure3"..."Procedure26")
| fields Procedure_Name, Process_Name, Activity_Code, UpdatedDate
| eval Procedure_Name = coalesce(Process_Name, Procedure_Name)
| eval update = strptime(UpdatedDate, "%Y-%m-%d %H:%M:%S")
| eval Day = relative_time(update,"@d") - if((tonumber(strftime(update, "%H%M")) < 1400), (24*60*60), 0)
| dedup Procedure_Name Day
| stats count(eval(Activity_Code = "2000")) as Success_Count, values(eval(if(Activity_Code != "2000", Procedure_Name, null()))) as Failures, values(Procedure_Name) as AllProcedures, values(UpdatedDate) as UpdatedDate, count as Procedure_Count by Day
| eval Success_Percent = round(((Success_Count/Procedure_Count)*100),2)
| sort - Day
| eval Day = strftime(Day, "%F")
| table Day, Success_Count, Procedure_Count, Success_Percent, Failures, AllProcedures, UpdatedDate

The process and procedure lists I'm checking for are actually identical, so Process1 is the same as Procedure1, Process6 = Procedure6, etc. However, I want to account for procedures/processes that failed to run at all, since we consider that a failure too. But because they didn't run, there are no events for them. Is there some way to compare my list of procedures/processes that should be there to the list that's actually there (AllProcedures) and add the difference to my Failures list, or to another list like "FailedToRun"?
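Since procedures that never ran leave no events, the expected list has to come from somewhere other than the data: the query itself or a lookup. One sketch using mvmap and mvfind (available in Splunk 8.0+); the short comma-separated list is a stand-in for your real 26 names, and it assumes the stats ... by Day from your search has already produced AllProcedures:

```spl
  (your existing search, through the stats ... by Day)
| eval Expected = split("Process1,Process2,Process3,Process26", ",")
| eval FailedToRun = mvmap(Expected,
      if(isnull(mvfind(AllProcedures, "^" . Expected . "$")), Expected, null()))
| eval Failures = mvappend(Failures, FailedToRun)
| fields - Expected
```

mvfind returns null when a name has no match in AllProcedures, so mvmap keeps exactly the names that produced no events that Day; mvappend then folds them into the existing Failures column. If the list changes often, an inputlookup of expected names is easier to maintain than a hard-coded split().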
I have newly installed a UF on a Linux machine, and the splunk user/group has been created. Afterwards I pushed apps from the deployment server, which sits on a Windows machine. The problem is that the pushed apps arrive owned by the root user with root-only read/write permissions, so the scripts fail to execute with a permission-denied error. After changing them to the splunk group with rw access, the scripts work. How can apps pushed from the deployment server be assigned to the splunk group automatically?
Hello, I have a log file where each event starts with a date; however, there are two date formats. Some of the events are multi-line, and some of the data is separated by a blank line. Upon uploading the file, Splunk thinks the blank line is the start of a new event, so every line after that blank line is split into a new event. Here's an example:

2020-11-02 18:40:31,293+0000 some data INFO   some more data
2020-11-03 18:40:31,293+0000 some data INFO   some more data
2020-11-05 18:40:31,293+0000 some data INFO   some more data
06-FEB-2020 18:40:11.289 INFO [main} data some more data
2020-11-12 18:40:31,293+0000 some data INFO   some more data
     data to look for
     ___testing________

        ID:0
        type: Fruit
        Name: Mango
        Desc: Ripe
2020-11-22 18:40:31,293+0000 some data INFO   some more data
     starting something new
2020-11-23 18:40:31,293+0000 some data INFO   some more data

I think telling Splunk to ignore or remove the blank lines should fix my problem, as I want to keep all multi-line data together within the event that starts with a date, but I haven't had much luck getting the appropriate regex to work. I hope the experts can help with this. Thanks in advance.
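Rather than removing the blank lines, you can tell Splunk to break events only where a line begins with one of the timestamps. A props.conf sketch (the sourcetype name is a placeholder, and it assumes the two formats shown, ISO with comma milliseconds and DD-MON-YYYY with dot milliseconds, are the only event starters):

```ini
[your:sourcetype]
# Break only before a recognized timestamp; the lookahead keeps the
# timestamp itself in the next event, and blank lines that are not
# followed by a timestamp stay inside the current event.
SHOULD_LINEMERGE = false
LINE_BREAKER = ([\r\n]+)(?=\d{4}-\d{2}-\d{2} \d{2}:\d{2}:\d{2},\d{3}|\d{2}-[A-Z]{3}-\d{4} \d{2}:\d{2}:\d{2}\.\d{3})
MAX_TIMESTAMP_LOOKAHEAD = 30
```

This goes on the indexer or heavy forwarder that first parses the data (or in the "Advanced" options of the upload preview when testing). With two timestamp formats in one file, it is simplest to leave TIME_FORMAT unset and let automatic timestamp recognition handle both.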
I'm working on cleaning up permissions for knowledge objects on our search head cluster. I noticed that if I create new knowledge objects and share them with the app, their settings in local.meta don't have a line for 'access ='.

Default app permissions example:

[savedsearches/test-perms]
export = none
owner = admin
version = 8.1.1
modtime = 1610106340.325106000

After manually changing permissions (even back to the default):

[savedsearches/test-perms]
access = read : [ admin ], write : [ admin ]
export = none
owner = admin
version = 8.1.1
modtime = 1610106459.558993000

No matter what I do to objects that have had permissions set, I can't seem to get them back to a state without this line, simply accepting the app's default permissions. Does anyone know a safe way to reset a knowledge object's permissions to app defaults without directly modifying local.meta on the search head cluster members?
Hi All, Greetings for the day!! Can someone give me any suggestions on the feasibility of integrating a Nasuni appliance with Splunk? We want to get the audit logs ingested into Splunk.
I am using the same timechart search query, 'search | timechart span=1d sum(xxx)', in both cases. When I set the time range picker to the 'Yesterday' preset, I get a different value for yesterday than when I change the time range to 'Week to date' and view the stats table. Could there be a reason for this?
Trying to pick domainType and domainName from the log below using the regexes below. They work in regex101 but not in Splunk, where they give a blank column.

domainName: rex "(?:domainName\\\"\:\\\")(?<domainName>([a-zA-Z0-9-\.]+))"
domainType: rex "(?:domainType\\\"\:\\\")(?<domainType>\w)"

The log:

"payload":"{\"domainType\":\"L\",\"modifiedBy\":\"\",\"relayHost\":\"\",\"rewriteDomain\":\"\",\"wildcardAccount\":\"\",\"domainName\":\"xxx.yyyyy.com\"}"},"encoding":null,"contentType":"application/json","responseCode":null}
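In the raw event the inner quotes are preceded by literal backslashes (\"domainName\":\"...), so the regex has to match a real backslash-plus-quote. On regex101 you test the pattern directly against the text, but inside Splunk's double-quoted rex string the search-language parser consumes one level of backslashes before the regex engine sees the pattern, which is the classic cause of "works on regex101, blank column in Splunk". Also note that \w captures a single character only (fine here since domainType is L, but \w+ is safer). A check of the underlying pattern in Python, where the raw string stands in for _raw:

```python
import re

# Sample event text: in the indexed data the inner quotes are escaped,
# i.e. the raw text literally contains backslash-quote sequences.
raw = r'"payload":"{\"domainType\":\"L\",\"modifiedBy\":\"\",\"domainName\":\"xxx.yyyyy.com\"}"'

# One literal backslash in the data = \\ in the pattern.
domain_name = re.search(r'domainName\\":\\"(?P<domainName>[A-Za-z0-9.-]+)', raw)
domain_type = re.search(r'domainType\\":\\"(?P<domainType>\w+)', raw)
```

In SPL you would then typically have to escape each of those pattern backslashes again inside the quoted rex string (so a single literal backslash in the data often ends up as four backslashes in the search bar); the exact count is worth verifying on your own events, as it is a frequent source of off-by-one-escape errors.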
Hi all, I'm trying to create a visualisation to show the percentage of ticket statuses (New, Completed, Cancelled, etc.). I tried this search:

| stats latest(_time) as Time by "Record Number", PI_Number, PI_Event_Status

I have tickets with a Record Number (id), a Number (version), and a Status. I want to take the last event of each version of a ticket and get its status, in order to count the current status of each event in the system. The result is the following:

Record Number | Number | Status | _time
11867114 | 1 | Completed - Owner Action Required | 1472175180
11867114 | 1 | New | 1471951740
12297522 | 1 | Completed - Nothing Found | 1477321800
12297522 | 1 | Investigating | 1475829120
12297522 | 1 | New | 1475735400
12297522 | 2 | Completed - Error/Workaround Found | 1479229260
12297522 | 2 | New | 1479198300
12297522 | 3 | Completed - Recovery PTR Open | 1482241320
12297522 | 3 | New | 1482226920

With my stats command I'm not able to retrieve only the last event for each version of a ticket. How can I do that?

Regards, Clément
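The trick is that grouping by PI_Event_Status keeps one row per status; instead, let latest() pull the status along with the time, grouping only by ticket and version. A sketch with the field names from your search (it assumes PI_Event_Status holds the status shown in the table and that _time is set on the events):

```spl
| stats latest(PI_Event_Status) as Status latest(_time) as Time by "Record Number", PI_Number
| stats count as Tickets by Status
| eventstats sum(Tickets) as Total
| eval Percent = round(Tickets / Total * 100, 2)
| fields - Total
```

The first stats collapses each version to its most recent event's status; the second counts versions per current status, and the eventstats/eval pair turns the counts into percentages ready for a pie chart.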
Hello, please help. I have a log (example):

[Information] Downtime start:08/01/2021 04:39:56.997 aaxService:NotAvailable

I would like to send an email, triggered by an alert, when this log occurs 5 times per host within 15 minutes. Thank you very much.
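A sketch of the alert search; the index and sourcetype are placeholders for wherever these logs live, and the quoted strings lean on the raw text shown above:

```spl
index=your_index sourcetype=your_sourcetype "Downtime start" "aaxService:NotAvailable" earliest=-15m@m latest=@m
| stats count as occurrences by host
| where occurrences >= 5
```

Save it as a scheduled alert running every 15 minutes (a shorter cron with the same 15-minute window gives a more sliding behaviour), set the trigger condition to "number of results > 0", choose "trigger for each result" so each offending host gets its own email, and add throttling on host if you want to avoid repeats.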
I understand that, per the docs, the timechart command is required to put a sparkline and trendline on a single value visualization. However, if I am showing the availability % of my service for, say, the last 24 hours and it failed once, then my sparkline shows one dip but the number always shows 100%, because the single value renders the stat of the latest timechart bucket. Currently Splunk shows it as in the first screenshot, but ideally I would like to show it as in the second. My search:

index=network sourcetype=nt:logs
| stats count(eval('Summary.Error.Code'!="")) as failed_count count as total_count by Test, _time
| eval availability = 100 - ((failed_count/total_count)*100)
| table Test _time availability
| timechart avg(availability) by Test

Any help is appreciated.
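One workaround sketch: let timechart drive the sparkline, then append a single summary row whose _time is "now", so the latest point (the one the single value renders) carries the overall availability instead of the last bucket's. This drops the by Test split, since a single value visualization can only render one series; it also shows an unweighted average of the bucket averages, which may differ slightly from a volume-weighted figure:

```spl
index=network sourcetype=nt:logs
| stats count(eval('Summary.Error.Code'!="")) as failed_count count as total_count by _time
| eval availability = 100 - ((failed_count / total_count) * 100)
| timechart span=1h avg(availability) as availability
| appendpipe
    [ stats avg(availability) as availability
    | eval _time = now() ]
```

The sparkline still dips where failures occurred, while the big number reflects the whole window.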
Hi, currently I have local Splunk accounts for all users. Now I am setting up SAML (Okta) authentication for those users. How can I transition each user's local account to a SAML account without losing their knowledge objects, including any private knowledge objects? Assume I have the same username in both, i.e. in the local account as well as the SAML account. What process should I follow? Please help. Thanks.
I want to measure the health of the JVMs running on a server. GC printing is already enabled there, but I need to know how exactly we can read the GC log to identify the health.
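Assuming the GC logs are being forwarded to an index, you can extract pause times and heap occupancy with rex and trend them. A sketch against a classic -XX:+PrintGCDetails line ending like "65536K->10300K(251392K), 0.0123456 secs"; the index, sourcetype, and the exact log format are assumptions, since GC output varies by collector and Java version:

```spl
index=app_logs sourcetype=jvm:gc
| rex "(?<heap_before_k>\d+)K->(?<heap_after_k>\d+)K\((?<heap_total_k>\d+)K\), (?<gc_pause_secs>[\d.]+) secs"
| eval heap_used_pct = round(heap_after_k / heap_total_k * 100, 2)
| timechart span=5m avg(gc_pause_secs) as avg_pause max(gc_pause_secs) as worst_pause avg(heap_used_pct) as heap_after_gc_pct
```

The usual unhealthy signals in these trends are rising heap occupancy after collections (a leak or undersized heap), growing or frequent long pauses, and an increasing share of Full GC events relative to young-generation collections.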
Hi! My search query looks up an Excel spreadsheet with a column called Time that is populated with a time, e.g. 10:00 AM (no date included). However, when I look up this field in the Splunk query, I notice that the Time field is now associated with today's date, e.g. 08-01-2021 10:00. This is an issue, as I am trying to see whether the event (a time without a date) occurred between two specific date and time ranges, i.e. whether an event with a time of 10:00 AM occurred between 02-01-2021 and 03-01-2021. Because Splunk associates the time with today's date, the event is not picked up, as it is linked to today's date and thus falls outside the date range. Any ideas? Thanks in advance.