All Topics

I will preface by saying I am very new to using Splunk. We recently rebuilt our environment, and I noticed that one of our log sources does not return logs formatted the same way our other log sources do. Whenever I run a query for AMP (Cisco Secure Endpoint), I have to click 'Show as raw text' to see any data, which does not seem right to me. I have also been trying to extract fields using rex, and it just does not seem to be working; I'm not sure if that has something to do with how the logs are displayed when I run a query. Could someone point me in the right direction?
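For reference, a minimal way to sanity-check a rex extraction against the raw event text; the sourcetype and the key/value pattern here are hypothetical, since the real pattern depends on how the Cisco Secure Endpoint logs actually arrive:

index=main sourcetype=cisco:amp:event
| rex field=_raw "\"event_type\":\"(?<event_type>[^\"]+)\""
| stats count by event_type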
I have a search from which I need to tune out a large number of values (about 25) in a proctitle command field. Currently I am using:

NOT proctitle IN ("*<proc1>*", "*<proc2>*", ......., "*<proc25>*")

I'm worried about performance on the search head and am looking for ways to lower the CPU and memory burden. I have two possible solutions: 1) Create a data model and place this search as a constraint. 2) Tag events on ingest with proctitle IN ("*<proc1>*", "*<proc2>*", ......., "*<proc25>*") and use this tag as a constraint in the data model. I've played with #1. Is #2 possible, and is there a more efficient way to do this? Thanks in advance.
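For illustration, a minimal sketch of the eventtype/tag mechanics behind option 2, with hypothetical process names; one caveat is that eventtypes and tags are applied at search time, not at ingest:

# eventtypes.conf
[noisy_proctitle]
search = proctitle IN ("*proc1*", "*proc2*")

# tags.conf
[eventtype=noisy_proctitle]
noisy = enabled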
I have a query that counts totals for each day for the past 7 days and produces these results: 2, 0, 2, 0, 0, 0, 0. No matter what I do, the single value visualization with timechart and trendlines enabled ignores the trailing zeros and displays a 2, with a trendline showing an increase of 2. It should display a zero, with a zero trendline representing the last two segments (both zero).

Before the main query (as recommended) I have used | makeresults earliest="-7d@d" count=0 to ensure the days with zero count are included. I have tried the suggested appendpipe option:

| appendpipe [| stats count | where count=0 | addinfo | eval _time=info_min_time | table _time count]

and the appendpipe with max(count) option:

| appendpipe [| stats count | where count=0 | addinfo | eval time=info_min_time." ".info_max_time | table time count | makemv time | mvexpand time | rename time as _time | timechart span=1d max(count) as count]

Neither creates the correct timechart. From the dashboard in Edit UI mode, if I click on the query magnifying glass and open it in a new tab, the results do NOT display the trailing zeros. If I copy and paste the query into a search bar with the time picker set to All Time, I get the correct values: 2, 0, 2, 0, 0, 0, 0. Is there an option setting I may have wrong? How do I fix this?
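For comparison, a minimal sketch of a pattern that forces empty days to chart as zeros, using a hypothetical base search; the key points are an explicit span and a fillnull after the timechart:

index=main sourcetype=my_sourcetype earliest=-7d@d latest=@d
| timechart span=1d count
| fillnull value=0 count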
I'm seeing this error from the _internal index in the web_service.log and the python.log: "startup:116 - Unable to read in product version information; isSessionKeyDefined=True error=[HTTP 401] Client is not authenticated" Does anyone have more information on this error?
Hi, I would like to have the Citrix Cloud add-on installed in Splunk Cloud. How can I achieve this?
Hi Team, how can I ingest Genesys Cloud logs into Splunk? I see two apps:

1. Genesys Pulse Add-on for Splunk https://splunkbase.splunk.com/app/5255
2. Genesys Cloud Operational Analytics App https://splunkbase.splunk.com/app/6552

For the Genesys Pulse Add-on for Splunk, I was able to see that we need to set up the configuration, but for the Genesys Cloud Operational Analytics App I don't follow the setup configuration. Which app should be used for Genesys Cloud log ingestion into Splunk Cloud?
Hi fellow Splunkers, I recently came across an authentication token created by splunk-system-user, and I had no clue where this token came from; my Splunk admin colleagues didn't create the token either. Is it a feature/normal behavior that Splunk will generate a token every single time you click on "view on mobile" from the menu of an XML dashboard? Can we turn it off? We don't want users to be able to freely create an infinite number of authentication tokens, because it would make keeping an overview of tokens much harder, and we have not configured the Secure Gateway.
I have smart card authentication enabled on my on-prem Enterprise system. I'm using the built-in capability that Splunk has now, not Apache. It had been working great, but when I upgraded my system from 9.0.3 to 9.2.1, I get an Unauthorized error when trying to log on. I changed requireClientCert to false so I could log on with username and password. I checked all my LDAP settings and everything looks the same. I even added another DNS entry to see if that would change anything, but no luck; I am still getting the unauthorized error.
I'm trying to install “Cisco Networks App for Splunk Enterprise” and “Cisco Networks Add-on for Splunk Enterprise” in Splunk Cloud (Version: 9.1.2308.203, Build: d153a0fad666), but it is not possible. When I search for them they do not appear, and if I try to upload them I receive a message informing me: "This app is available for installation directly from Splunkbase. To install this app, use the App Browser page in Splunk Web." But that page is nowhere to be found.
Hello everyone, I'm trying to calculate the "time_difference" between one column and another in Splunk. The problem is that the value from which I subtract something is the current time, and when I use the current time value it is shown in the table as a single event (epoch_current_time). Therefore, when I subtract the value "epoch_password_last_modified" from "epoch_current_time", I get no results. Is there a way to make "epoch_current_time" visible in every row, like the "epoch_password_last_modified" value?
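For illustration, a minimal sketch assuming epoch_password_last_modified is already in epoch seconds: eval computes now() for every row, so no separate current-time event is needed:

| eval epoch_current_time=now()
| eval time_difference=epoch_current_time - epoch_password_last_modified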
I have a dataset of user data including the user's LastLogin. The LastLogin field is slightly oddly formatted but very regular in its pattern. I wish to calculate the number of days since LastLogin. This should be super simple. What is bizarre is that in a contrived example using makeresults it works perfectly:

| makeresults
| eval LastLogin="Mar 20, 2024, 16:40"
| eval lastactive=strptime(LastLogin, "%b %d, %Y, %H:%M")
| eval dayslastactive=round((now() - lastactive) / 86400, 0)

This yields the expected result (screenshot omitted). But with the actual results the same transformations do not work:

| inputlookup MSOLUsers
| where match(onPremisesDistinguishedName, "OU=Users")
| where not isnull(LastLogin)
| eval LastActive=strptime(LastLogin, "%b %d, %Y, %H:%M")
| eval DaysLastActive=round((now() - LastActive) / 86400, 0)
| fields Company, Department, DisplayName, LastLogin, LastActive, DaysLastActive

This yields no values for LastActive or DaysLastActive (screenshot omitted). What am I missing? Cutting and pasting the strings into the makeresults form gives what I would expect.
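As a hedged debugging sketch (field names as above): a frequent cause of strptime returning null on lookup data is stray whitespace that is invisible in the UI and that a hand-typed makeresults test would not reproduce. Comparing the raw length against the trimmed length can confirm it:

| inputlookup MSOLUsers
| where not isnull(LastLogin)
| eval rawlen=len(LastLogin), trimlen=len(trim(LastLogin))
| eval LastActive=strptime(trim(LastLogin), "%b %d, %Y, %H:%M")
| table LastLogin rawlen trimlen LastActive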
Hello, I am looking to limit my search results to only 6pm to 9pm over the last 90 days. How can I achieve this with the query below?

index=* host=*
| eval pctCPU=if(CPU="all",100-pctIdle,Value)
| timechart avg(pctCPU) AS avgCPU BY host
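For illustration, a minimal sketch of one approach, assuming standard _time extraction; it keeps only events whose hour of day is 18, 19, or 20 before charting (the built-in date_hour field can serve the same purpose when it is populated):

index=* host=* earliest=-90d
| where tonumber(strftime(_time, "%H"))>=18 AND tonumber(strftime(_time, "%H"))<21
| eval pctCPU=if(CPU="all",100-pctIdle,Value)
| timechart span=1h avg(pctCPU) AS avgCPU BY host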
Below I have provided a sample trace where the message field has the following format:

Error_Request_Response for URI: {}, and Exception Occurred: {}

I want to create a table with the URI and the exception, like this:

traceId | uri | exception
aaa | bbb | ccc

Can someone guide me on how to do it?

{"@timestamp":"2024-04-02T11:27:21.745Z","uri":"","level":"ERROR","service":"product-aggregator-models","traceId":"660beb9951d7a6d29b2480e4450d8d82","spanId":"fd29a51914f47434","parentSpanId":"","pid":"1","thread":"reactor-http-epoll-2","class":"c.h.e.p.client.v20.ProductServiceClient","message":"Error_Request_Response for URI: https://www.bbb.com/idvh/rest/Products/2171815?limit=50&offset=0&IsOnExchange=false&includeRates=true&channelWorkflow=Agency&isSamsClubPlan=false, and Exception Occurred: ExceptionStateModel(code=404 NOT_FOUND, description=Requested resource not found, timestamp=2024-04-02T11:27:21.745328361, status=404, path=null, traceId=null, causes=[ExceptionStateModel(code=404 NOT_FOUND, description=Response body is null or Empty, timestamp=2024-04-02T11:27:21.745303692, status=404, path=https://www.bbb.com/idvh/rest/Products/2171815?limit=50&offset=0&IsOnExchange=false&includeRates=true&channelWorkflow=Agency&isSamsClubPlan=false, traceId=null, causes=null)])","stack_trace":"com.bbb.eai.chassis.exceptions.BaseAPIException: null\n\tat com.bbb.eai.chassis.config.BaseWebClient.buildExceptionObject(BaseWebClient.java:279)\n\tSuppressed: reactor.core.publisher.FluxOnAssembly$OnAssemblyException: \nError has been observed at the following site(s):\n\t*__checkpoint ⇢ 404 NOT_FOUND from GET https://www.bbb.com/idvh/rest/Products/2171815 [DefaultWebClient]\nOriginal Stack Trace:\n\t\tat com.bbb.eai.chassis.config.BaseWebClient.buildExceptionObject(BaseWebClient.java:279)\n\t\tat com.bbb.eai.chassis.config.BaseWebClient.buildExceptionStateModel(BaseWebClient.java:206)\n\t\tat com.bbb.eai.chassis.config.BaseWebClient.lambda$handleExceptions$1(BaseWebClient.java:199)\n\t\tat reactor.core.publisher.MonoFlatMap$FlatMapMain.onNext(MonoFlatMap.java:132)\n\t\tat reactor.core.publisher.Operators$BaseFluxToMonoOperator.completePossiblyEmpty(Operators.java:2071)\n\t\tat reactor.core.publisher.MonoHasElement$HasElementSubscriber.onComplete(MonoHasElement.java:96)\n\t\tat reactor.core.publisher.MonoNext$NextSubscriber.onComplete(MonoNext.java:102)\n\t\tat reactor.core.publisher.FluxMap$MapSubscriber.onComplete(FluxMap.java:144)\n\t\tat reactor.core.publisher.FluxMap$MapSubscriber.onComplete(FluxMap.java:144)\n\t\tat reactor.core.publisher.FluxPeek$PeekSubscriber.onComplete(FluxPeek.java:260)\n\t\tat reactor.core.publisher.FluxMap$MapSubscriber.onComplete(FluxMap.java:144)\n\t\tat reactor.netty.channel.FluxReceive.onInboundComplete(FluxReceive.java:413)\n\t\tat reactor.netty.channel.ChannelOperations.onInboundComplete(ChannelOperations.java:431)\n\t\tat reactor.netty.channel.ChannelOperations.terminate(ChannelOperations.java:485)\n\t\tat reactor.netty.http.client.HttpClientOperations.onInboundNext(HttpClientOperations.java:712)\n\t\tat reactor.netty.channel.ChannelOperationsHandler.channelRead(ChannelOperationsHandler.java:114)\n\t\tat io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:444)\n\t\tat io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420)\n\t\tat io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412)\n\t\tat 
io.netty.handler.timeout.IdleStateHandler.channelRead(IdleStateHandler.java:286)\n\t\tat io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442)\n\t\tat io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420)\n\t\tat io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412)\n\t\tat io.netty.channel.CombinedChannelDuplexHandler$DelegatingChannelHandlerContext.fireChannelRead(CombinedChannelDuplexHandler.java:436)\n\t\tat io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:346)\n\t\tat io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:318)\n\t\tat io.netty.channel.CombinedChannelDuplexHandler.channelRead(CombinedChannelDuplexHandler.java:251)\n\t\tat io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442)\n\t\tat io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420)\n\t\tat io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412)\n\t\tat io.netty.handler.ssl.SslHandler.unwrap(SslHandler.java:1466)\n\t\tat io.netty.handler.ssl.SslHandler.decodeJdkCompatible(SslHandler.java:1329)\n\t\tat io.netty.handler.ssl.SslHandler.decode(SslHandler.java:1378)\n\t\tat io.netty.handler.codec.ByteToMessageDecoder.decodeRemovalReentryProtection(ByteToMessageDecoder.java:529)\n\t\tat io.netty.handler.codec.ByteToMessageDecoder.callDecode(ByteToMessageDecoder.java:468)\n\t\tat io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:290)\n\t\tat io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:444)\n\t\tat io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420)\n\t\tat io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412)\n\t\tat io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1410)\n\t\tat io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:440)\n\t\tat io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420)\n\t\tat io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:919)\n\t\tat io.netty.channel.epoll.AbstractEpollStreamChannel$EpollStreamUnsafe.epollInReady(AbstractEpollStreamChannel.java:800)\n\t\tat io.netty.channel.epoll.EpollEventLoop.processReady(EpollEventLoop.java:509)\n\t\tat io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:407)\n\t\tat io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997)\n\t\tat io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74)\n\t\tat io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30)\n\t\tat java.base/java.lang.Thread.run(Thread.java:840)\n"}
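A hedged sketch of one way to extract these, assuming the JSON fields (message, traceId) are already extracted at search time; the rex pattern relies on the URI ending at the first comma:

| rex field=message "Error_Request_Response for URI: (?<uri>[^,]+), and Exception Occurred: (?<exception>.+)"
| table traceId uri exception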
Hey, I have a problem preparing a Splunk query. Could you assist me? I have a simple query that returns a table with a few fields:

some-search
| fields id, time
| table id, time

I also have a macro with two arguments (id and time) that returns a table with status and type fields. I want to modify the first query somehow to run a subquery for each row by calling my macro and appending the fields to the final table. Finally, I want to have a table with four fields: id, time, status, and type (where status and type were obtained by calling a subquery with id and time). Is it possible?
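For illustration, a hedged sketch using the map command, which runs a secondary search once per input row and substitutes field values through $...$ tokens; my_macro is the hypothetical macro, the evals reattach id and time since map only returns the fields of the inner search, and whether the macro expands correctly inside map's quoted search string is an assumption worth verifying:

some-search
| fields id, time
| map maxsearches=100 search="| `my_macro($id$, $time$)` | eval id=\"$id$\", time=\"$time$\""
| table id, time, status, type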
Hello, I'm using Splunk Enterprise 9.1.2 on my local Linux machine (in a Docker container). When documenting a new custom command in the searchbnf.conf file in my Splunk app, one of the fields is called "description". However, I can't seem to find where this field is actually used. The command help that pops up when I start typing the command in the search assistant in Splunk Web only shows the "shortdesc" field. This is also true for commands from apps installed from Splunkbase. I'm attaching a screenshot that shows what I see when I start typing a command. In this case I'm using the "summary" command from the Splunk ML app. The entry in searchbnf.conf is:

[summary-command]
syntax = summary <model_name>
shortdesc = Returns a summary of a machine learning model that was learned using the "fit" command.
description = The summary is algorithm specific. For <LinearRegression>, \
  the summary is a list of coefficients. For \
  <LogisticRegression>, the summary is a list of \
  coefficients for each class.
comment1 = Inspect a previously-learned LinearRegression model, "errors_over_time":
example1 = | summary errors_over_time
usage = public
related = fit apply listmodels deletemodel

As can be seen, only "shortdesc" shows in the little help window that pops up. Is this normal? Is there something to change in some configuration to make the full description show? Is there a different place I can find the full description for the command?

FYI, the "Learn more" link goes to http://localhost:8000/en-US/help?location=search_app.assist.summary which gets automatically redirected to https://docs.splunk.com/Documentation/MLApp/5.4.1/User/Customsearchcommands?ref=hk#summary. However, for my custom command it gets redirected to https://docs.splunk.com/Documentation/Splunk/9.1.2/SearchReference/3rdpartycustomcommands?ref=hk, so not relevant... Thanks!
Hello Splunk Community, I'm seeking assistance with ingesting data from Tenable into my Splunk AIO instance. Every time I attempt to configure the TA for Tenable, I encounter HTTPS errors. I suspect there may be an error in the steps I'm following, and I would greatly appreciate someone guiding me through the process step by step. Additionally, I'm looking for guidance on how to ensure that the dashboards are running properly in the Tenable app within Splunk.
Hi guys, I am using the timeline visualization in my Splunk dashboard to show total elapsed time, but sometimes it does not look good. Are there any examples showing total elapsed time in a different visualization? Is there a best way to show total elapsed time as a graph in a Splunk dashboard?

In my Splunk query:

| eventstats min(timestamp) AS Logon_Time, max(timestamp) AS Logoff_Time by correlationId
| eval StartTime=round(strptime(Logon_Time, "%Y-%m-%dT%H:%M:%S.%QZ"))
| eval EndTime=round(strptime(Logoff_Time, "%Y-%m-%dT%H:%M:%S.%QZ"))
| eval ElapsedTimeInSecs=EndTime-StartTime
| eval "Total Elapsed Time"=strftime(ElapsedTimeInSecs,"%H:%M:%S")

In my dashboard:

| sort -Timestamp
| eval ElapsedTimeInSecs=ElapsedTimeInSecs*1000
| table Timestamp correlationId Status ElapsedTimeInSecs
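As an aside, a hedged alternative for formatting the duration: strftime() interprets its argument as an epoch timestamp, so durations of 24 hours or more wrap around, whereas tostring with the "duration" format renders elapsed seconds directly:

| eval "Total Elapsed Time"=tostring(ElapsedTimeInSecs, "duration")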
Hi, I am working in a distributed Splunk environment with one search head and an indexer cluster. I am trying to monitor a path that is on the search head, and I created a monitor input from the web GUI. How do I create an index on the indexer cluster and configure forwarding from the search head to the indexer cluster? Please help me. Thanks.
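For illustration, a hedged sketch of the two pieces, with hypothetical host and index names. Forwarding is configured in outputs.conf on the search head, and the index is defined in an app pushed from the cluster manager (manager-apps in 9.x, master-apps on older versions):

# outputs.conf on the search head
[tcpout]
defaultGroup = primary_indexers

[tcpout:primary_indexers]
server = idx01.example.com:9997, idx02.example.com:9997

# indexes.conf in $SPLUNK_HOME/etc/manager-apps/_cluster/local on the cluster manager
[my_new_index]
homePath = $SPLUNK_DB/my_new_index/db
coldPath = $SPLUNK_DB/my_new_index/colddb
thawedPath = $SPLUNK_DB/my_new_index/thaweddb
repFactor = auto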
I am inputting my regex in PCRE format and it's still not working. I am trying to exclude all static resources with one regex:

(?i).*\.(jpeg|jpg|png|gif|pdf|txt|js|html|tff|css|svg|dll|yml|yaml|ico|env|gz|bak)$
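For quick testing, a minimal sketch using match() on a sample value; the field name url is hypothetical, and since match() performs a partial match, the leading .* is unnecessary:

| makeresults
| eval url="/assets/logo.png"
| eval is_static=if(match(url, "(?i)\.(jpeg|jpg|png|gif|pdf|txt|js|html|tff|css|svg|dll|yml|yaml|ico|env|gz|bak)$"), "yes", "no")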
Hello everyone, We installed Splunk Enterprise on individual servers, one for each Splunk component, at a temporary site and then moved the deployment to its permanent location. At the temporary site we named the indexers and cluster master with a "-t" suffix (i.e., xxlsplunkcm1-t, xxlsplunkidx01-t, and xxlsplunkidx02-t) to indicate they were temporary. Since the indexers are physical servers, we are building new physical servers and naming them xxlsplunkidx01 and xxlsplunkidx02 respectively. We want to rename the cluster master to xxlsplunkcm1. Can you help me with the validation steps and pre-checks to be performed in order to maintain a healthy Splunk environment?
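For illustration, a hedged sketch of the settings a cluster manager rename typically touches, using the hostnames above as assumptions: the manager's own serverName in server.conf, and the manager_uri that every indexer peer and search head uses to reach it:

# server.conf on the renamed cluster manager
[general]
serverName = xxlsplunkcm1

# server.conf on each indexer peer and search head
[clustering]
manager_uri = https://xxlsplunkcm1:8089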