Find Answers
Ask questions. Get answers. Find technical product solutions from passionate members of the Splunk community.

All Posts

Week numbers on their own always sort in simple numerical order. To make them sort chronologically across years, change the numbers by prefixing them with the year:

| eval weeknum=strftime(_time, "%y-%V")
| chart dc(Task_num) as Tasks over weeknum by STATUS
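To see why the year prefix fixes the ordering, here is a minimal sketch using makeresults; the weeks_ago field is purely illustrative, not from the original search:

| makeresults count=5
| streamstats count AS weeks_ago
| eval _time = relative_time(now(), "-" . weeks_ago . "w")
| eval weeknum = strftime(_time, "%y-%V")
| sort weeknum
| table _time weeknum

Because the two-digit year comes first in "%y-%V", the lexical sort of weeknum matches chronological order even across a year boundary, e.g. "24-52" sorts before "25-01".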
You can find a list of third-party software used by Splunk at https://docs.splunk.com/Documentation/Splunk/latest/ReleaseNotes/Credits  The release notes for other Splunk products should have a similar list.
Have you tried incorporating the time zone in the strptime call?

| eval stime=strptime(s_time,"%Y-%m-%dT%H:%M:%S%Z")
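If it helps, here is a sketch of the whole difference calculation with the zone handled; the sample values are taken from the question, and it assumes Initial_Time is in the search head's local time zone:

| makeresults
| eval Initial_Time="2024-02-22 20:56:02,185", s_time="2024-02-23T01:56:02Z"
| eval itime=strptime(Initial_Time,"%Y-%m-%d %H:%M:%S,%3N")
| eval stime=strptime(s_time,"%Y-%m-%dT%H:%M:%S%Z")
| eval time_difference=tostring(abs(itime - stime), "duration")

With %Z consuming the trailing Z, stime is parsed as UTC rather than local time, which removes the constant five-hour offset.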
TIME_FORMAT is one of the "Great 8" settings every sourcetype should have. They help ensure events are onboarded properly. See if these settings help:

[exch_file_httpproxy-mapi]
ANNOTATE_PUNCT = false
LINE_BREAKER = ([\r\n]+)\d\d\d\d-\d\d
INDEXED_EXTRACTIONS = csv
initCrcLength = 2735
HEADER_FIELD_LINE_NUMBER = 1
MAX_TIMESTAMP_LOOKAHEAD = 24
SHOULD_LINEMERGE = false
TIMESTAMP_FIELDS = DateTime
TIME_FORMAT = %Y-%m-%dT%H:%M:%S.%3N%Z
TRANSFORMS-no_column_headers = no_column_headers
EVENT_BREAKER_ENABLE = true
EVENT_BREAKER = ([\r\n]+)\d\d\d\d-\d\d
TRUNCATE = 10000
I have Log Analytics deployed through the agent machine using JOBs, and I parse the logs with a grok expression. However, I noticed that I also receive data in the database that clearly does not match the pattern, i.e. events that do not have an ERROR logLevel. I don't want to parse those into columns - in fact, due to capacity, I don't want them in the database at all.

grok patterns:
- "%{TIMESTAMP_ISO8601:logEventTimestamp}%{SPACE}\\[%{NUMBER:logLevelId}\\]%{SPACE}%{LOGLEVEL:logLevel}%{SPACE}-%{SPACE}%{GREEDYDATA:msg}"

pattern.grok:
LOGLEVEL ([Ee]rr?(?:or)?|ERR?(?:OR)?)

Required data:
Unnecessary data:

I would be interested in how to get rid of them - can a where clause or some other filter be used?
Do any versions of Splunk and Splunk products utilize python-werkzeug?
Hi, I have two fields whose time zones seem to be different. Could you please help me get the difference?

itime= 2024-02-22 20:56:02,185
stime=2024-02-23T01:56:02Z

I tried the below, but it always gives around a 5-hour delay.

SPL:
| eval itime=strptime(Initial_Time,"%Y-%m-%d %H:%M:%S,%3N")
| eval stime=strptime(s_time,"%Y-%m-%dT%H:%M:%S")
| eval difference = abs(itime - stime)
| eval time_difference=tostring(difference, "duration")
Hello, I have a requirement to get logs into Splunk from Snowflake, and I have no idea where to start. I came across the Splunk docs for Splunk DB Connect: https://docs.splunk.com/Documentation/DBX/3.15.0/DeployDBX/Installdatabasedrivers

Can you guide me on how to get started here? How do I get logs from Snowflake into Splunk? Can I use a HEC token to get the logs?
Limits are configured in limits.conf. See limits.conf - Splunk Documentation.
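For example, the cap on the number of result rows a search returns lives in the [searchresults] stanza. A minimal sketch, shown with the documented default; change it on the search head and restart:

[searchresults]
# Maximum number of result rows a search returns (50000 is the shipped default).
maxresultrows = 50000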
You can only use three fields for xyseries: the x-axis, the y-axis, and the series (names) - hence the name of the command! It is similar in that respect to the chart command. Try something like this:

index="myindex"
| rex field=source "\/.*\/log\.(?<servername>\w+)."
| rex "R(?<Respcode>\[\d+\]\[\d+\])"
| bin _time span=1d
| stats count as Respcode_count by Respcode,servername,_time
| eval {servername} = Respcode_count
| fields - servername Respcode_count
| stats values(*) as * by _time Respcode
| fillnull value=0
Hello

What's the official limit on query results in Splunk? Is this also written somewhere on the Splunk website?

Kind regards
Christian
Thanks for your reply. I think you are right - I made some mistakes in the example data there. The query works even with multiple IDs, so thank you very much.
Is TIME_FORMAT necessary? Events that are not broken have no problem with timestamp determination.

This is a log example that contains headers, a preamble, and one event:

DateTime,RequestId,MajorVersion,MinorVersion,BuildVersion,RevisionVersion,ClientRequestId,Protocol,UrlHost,UrlStem,ProtocolAction,AuthenticationType,IsAuthenticated,AuthenticatedUser,Organization,AnchorMailbox,UserAgent,ClientIpAddress,ServerHostName,HttpStatus,BackEndStatus,ErrorCode,Method,ProxyAction,TargetServer,TargetServerVersion,RoutingType,RoutingHint,BackEndCookie,ServerLocatorHost,ServerLocatorLatency,RequestBytes,ResponseBytes,TargetOutstandingRequests,AuthModulePerfContext,HttpPipelineLatency,CalculateTargetBackEndLatency,GlsLatencyBreakup,TotalGlsLatency,AccountForestLatencyBreakup,TotalAccountForestLatency,ResourceForestLatencyBreakup,TotalResourceForestLatency,ADLatency,SharedCacheLatencyBreakup,TotalSharedCacheLatency,ActivityContextLifeTime,ModuleToHandlerSwitchingLatency,ClientReqStreamLatency,BackendReqInitLatency,BackendReqStreamLatency,BackendProcessingLatency,BackendRespInitLatency,BackendRespStreamLatency,ClientRespStreamLatency,KerberosAuthHeaderLatency,HandlerCompletionLatency,RequestHandlerLatency,HandlerToModuleSwitchingLatency,ProxyTime,CoreLatency,RoutingLatency,HttpProxyOverhead,TotalRequestTime,RouteRefresherLatency,UrlQuery,BackEndGenericInfo,GenericInfo,GenericErrors,EdgeTraceId,DatabaseGuid,UserADObjectGuid,PartitionEndpointLookupLatency,RoutingStatus
#Software: Microsoft Exchange Server
#Version: 15.02.1118.040
#Log-type: HttpProxy Logs
#Date: 2024-02-20T14:00:01.019Z
#Fields: DateTime,RequestId,MajorVersion,MinorVersion,BuildVersion,RevisionVersion,ClientRequestId,Protocol,UrlHost,UrlStem,ProtocolAction,AuthenticationType,IsAuthenticated,AuthenticatedUser,Organization,AnchorMailbox,UserAgent,ClientIpAddress,ServerHostName,HttpStatus,BackEndStatus,ErrorCode,Method,ProxyAction,TargetServer,TargetServerVersion,RoutingType,RoutingHint,BackEndCookie,ServerLocatorHost,ServerLocatorLatency,RequestBytes,ResponseBytes,TargetOutstandingRequests,AuthModulePerfContext,HttpPipelineLatency,CalculateTargetBackEndLatency,GlsLatencyBreakup,TotalGlsLatency,AccountForestLatencyBreakup,TotalAccountForestLatency,ResourceForestLatencyBreakup,TotalResourceForestLatency,ADLatency,SharedCacheLatencyBreakup,TotalSharedCacheLatency,ActivityContextLifeTime,ModuleToHandlerSwitchingLatency,ClientReqStreamLatency,BackendReqInitLatency,BackendReqStreamLatency,BackendProcessingLatency,BackendRespInitLatency,BackendRespStreamLatency,ClientRespStreamLatency,KerberosAuthHeaderLatency,HandlerCompletionLatency,RequestHandlerLatency,HandlerToModuleSwitchingLatency,ProxyTime,CoreLatency,RoutingLatency,HttpProxyOverhead,TotalRequestTime,RouteRefresherLatency,UrlQuery,BackEndGenericInfo,GenericInfo,GenericErrors,EdgeTraceId,DatabaseGuid,UserADObjectGuid,PartitionEndpointLookupLatency,RoutingStatus
2024-02-20T14:00:00.980Z,c3581a8e-2033-4fa0-8dbf-3efdc06ba7c3,15,2,1118,40,{5745B4EE-6A69-4E12-8EBD-6AD2820CA5D1},Mapi,mail.domain.com,/mapi/nspi/,,,false,,,,Microsoft Office/15.0 (Windows NT 10.0; Microsoft Outlook 15.0.5589; Pro),172.16.5.94,SERVERMBX06,401,,,POST,,,,,,,,,13,,,,,,,,,,,,,,,38,,,,,,,,,,,,,,0,,0,0,,?MailboxId=5918ae5a-9281-4301-b94e-407395ba2824@domain.com,,BeginRequest=2024-02-20T14:00:00.980Z;CorrelationID=<empty>;SharedCacheGuard=0;EndRequest=2024-02-20T14:00:00.980Z;S:ServiceLatencyMetadata.AuthModuleLatency=0,,,,,,
Hi Team,

We need to monitor memory by process for each Windows host. As checked, we couldn't find any processes for memory, and the instance value only shows "0". We are monitoring CPU by process for these hosts, and that data contains an instance field where we can see the process names. We verified the inputs.conf stanzas and everything looks good - we need your help to get the memory processes into the instance field.

Below is the query used to populate CPU utilization by process:

index=perfmon source="Perfmon:Process" counter="% Processor Time" instance!=_total instance!=idle
| timechart limit=20 useother=f span=10m max(Value) as cputime by instance

We need the same for memory utilization by process, with counter="% Committed Bytes In Use".

Thanks
Hello Splunkers!!

I want to configure Active Directory authentication in Splunk with LDAP. My Splunk server and domain controller are two different servers on the same network. Please guide me on the steps I need to follow.

1. Should I open inbound or outbound port 389 on both servers?
2. How do I create a user and a user group in Active Directory?
3. After the LDAP mapping, does it impact the existing Splunk users?
4. Please share a document if anybody has already performed a POC on this.
@ITWhisperer Here is the current query; when I add _time to xyseries, it shows the response codes as columns instead of rows:

index="myindex"
| rex field=source "\/.*\/log\.(?<servername>\w+)."
| rex "R(?<Respcode>\[\d+\]\[\d+\])"
| bin _time span=1d
| stats count as Respcode_count by Respcode,servername,_time
| xyseries Respcode,servername,Respcode_count

Current output:

Respcodes   Srv1   Srv2   Srv3   Srv4   ...
200         80     10     100    42
400         12     55     11     0
500         11     34     2      8
...

Expected output:

Date         Respcodes   Srv1   Srv2   Srv3   Srv4   ...
2024/02/23   200         80     10     100    42
2024/02/24   200         70     19     11     11
2024/02/23   400         12     55     11     0
2024/02/24   400         44     14     46     89
2024/02/23   500         11     34     2      8
2024/02/24   500         11     34     2      9
...

Any idea?
Hi,

Does Splunk have a document template for capturing dashboard requirements from users? The points below are what we ask users about what they are looking for in a dashboard. Are there any other points to add? I need a sample template.

1. Identify stakeholders' goals to suggest dashboard metrics ...
2. Take a questions-first approach ...
3. Understand how end users interact with the dashboard ...
4. Identify which KPIs are the most important ...
5. Create a step-by-step workflow ...
6. Collaborate on the dashboard layout in advance ...
7. Choose the dashboard type before creating it ...
8. Repurpose analyzed data to create useful dashboards ...

Thanks,
Karthi
With nearly 19k entries in your lookup table you have probably blown some limit - try splitting up your searches. For example, you could use head and tail to reduce the number of rows returned, as sketched below.
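A minimal sketch of that split, reusing the lookup and search from the question; the head/tail counts are illustrative and assume roughly 19k rows against a 10k subsearch row cap:

index=my_index "Certificate was successfully validated"
    [| inputlookup blank_clients.csv | head 10000 | table ClientName | rename ClientName AS search]

index=my_index "Certificate was successfully validated"
    [| inputlookup blank_clients.csv | tail 9000 | table ClientName | rename ClientName AS search]

Run the two halves as separate searches (or append one to the other) so neither subsearch hits the limit, and adjust the counts to your actual row total.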
index=my_index [| inputlookup blank_clients.csv | table ClientName | rename ClientName AS search] "Certificate was successfully validated"

If I execute just this part on its own, I get all the ClientName entries:

| inputlookup blank_clients.csv | table ClientName | rename ClientName AS search
@sivaranjiniG Python 3.7.17 is no longer officially supported as of June 2023. This means:

Security updates: it won't receive any further security patches, making it vulnerable to potential security risks.
Bug fixes: no bug fixes will be addressed, so you might encounter issues that won't be resolved.
Binary installers: these are no longer provided, so installation can be more complex and may require building from source.

While Python 3.7.17 might still function for some basic tasks, using an unsupported version is not recommended due to the security and maintenance risks mentioned above. It's strongly advised to upgrade to a currently supported Python version, such as 3.11 or 3.10. You can find more information and download links on the official Python website: https://www.python.org/downloads/