All Posts

I have a dataset of user data including the user's LastLogin. The LastLogin field is slightly oddly formatted but very regular in its pattern. I wish to calculate the number of days since LastLogin. This should be super simple. What is bizarre is that in a contrived example using makeresults it works perfectly.

| makeresults
| eval LastLogin="Mar 20, 2024, 16:40"
| eval lastactive=strptime(LastLogin, "%b %d, %Y, %H:%M")
| eval dayslastactive=round((now() - lastactive) / 86400, 0)

This yields the expected result. But with the actual data the same transformations do not work.

| inputlookup MSOLUsers
| where match(onPremisesDistinguishedName, "OU=Users")
| where not isnull(LastLogin)
| eval LastActive=strptime(LastLogin, "%b %d, %Y, %H:%M")
| eval DaysLastActive=round((now() - LastActive) / 86400, 0)
| fields Company, Department, DisplayName, LastLogin, LastActive, DaysLastActive

This does not produce the expected LastActive and DaysLastActive values. What am I missing? Cutting and pasting the strings into the makeresults form gives what I would expect.
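A common culprit in cases like this is hidden whitespace or non-printing characters in the lookup values, which a cut-and-paste silently drops. A minimal diagnostic sketch (the trim/length comparison is an assumption, not something from the original post):

| inputlookup MSOLUsers
| head 5
| eval rawlen=len(LastLogin) ``` length of the value as stored ```
| eval LastLoginTrimmed=trim(LastLogin)
| eval trimlen=len(LastLoginTrimmed) ``` length after stripping leading/trailing whitespace ```
| eval LastActive=strptime(LastLoginTrimmed, "%b %d, %Y, %H:%M")
| table LastLogin, rawlen, trimlen, LastActive

If rawlen and trimlen differ, or LastActive is still null after trimming, the stored strings likely contain characters that the pattern "%b %d, %Y, %H:%M" does not account for.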
Hello, I am looking to limit my search results to only 6pm to 9pm over the last 90 days. How can I achieve this with the query below?

index=* host=*
| eval pctCPU=if(CPU="all",100-pctIdle,Value)
| timechart avg(pctCPU) AS avgCPU BY host
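One possible approach (a sketch; earliest=-90d stands in for a 90-day range set in the time picker, and the bounds assume events from 21:00 onward are excluded) is to filter on the hour of each event before charting:

index=* host=* earliest=-90d
| eval hour=tonumber(strftime(_time, "%H")) ``` hour of the event, local time ```
| where hour>=18 AND hour<21
| eval pctCPU=if(CPU="all",100-pctIdle,Value)
| timechart avg(pctCPU) AS avgCPU BY host

Adjust the upper bound if events after 21:00 should also be included.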
The fields are probably multivalue for some of your transactions, which is why the eval is not working. You should probably start again with your events and work out how to break them up into separate parts so that you can create the composite id, as in the sketch below. In other words, to get where you want to be, don't start from where you are; you need to go back some steps.
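A sketch of the composite-id idea, using stats instead of transaction (partA, partB, and status are hypothetical placeholders for fields extracted from your events):

| eval composite_id=partA."-".partB ``` build the id from the event's own fields ```
| stats earliest(_time) AS start latest(_time) AS end values(status) AS status BY composite_id

Because every value feeding the eval comes from a single event, there are no multivalue surprises, and stats generally scales better than transaction.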
Thanks, will do.
The second search (macro) is like:

some-search-index $id$
| eval epoch = _time
| where epoch < $timestamp$
| sort _time
| head 1
| fields id, status, type
| table id, status, type
That error usually is seen when an indexer stops while processing a search. What relevant messages do you see in search.log? Are there any core files from the indexers? The local SSD provides only 8 days of storage. Any query that searches data more than 8 days old will have to fetch the data from SmartStore, which will slow processing of the search. Splunk recommends the local cache be large enough to hold at least 30 days of data.
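If the cache does need to grow, its size is set per indexer in server.conf (a sketch; the value shown is purely illustrative, and sizing should follow the 30-day guideline above):

# server.conf on each indexer
[cachemanager]
# Maximum disk space, in megabytes, the SmartStore cache may occupy per partition.
max_cache_size = 2000000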
Below I have provided a sample trace where the message has the following format:

Error_Request_Response for URI: {}, and Exception Occurred: {}

I want to create a table with the URI and exception per trace, like:

traceId    uri    exception
aaa        bbb    ccc

Can someone guide me on how to do it?

{"@timestamp":"2024-04-02T11:27:21.745Z","uri":"","level":"ERROR","service":"product-aggregator-models","traceId":"660beb9951d7a6d29b2480e4450d8d82","spanId":"fd29a51914f47434","parentSpanId":"","pid":"1","thread":"reactor-http-epoll-2","class":"c.h.e.p.client.v20.ProductServiceClient","message":"Error_Request_Response for URI: https://www.bbb.com/idvh/rest/Products/2171815?limit=50&offset=0&IsOnExchange=false&includeRates=true&channelWorkflow=Agency&isSamsClubPlan=false, and Exception Occurred: ExceptionStateModel(code=404 NOT_FOUND, description=Requested resource not found, timestamp=2024-04-02T11:27:21.745328361, status=404, path=null, traceId=null, causes=[ExceptionStateModel(code=404 NOT_FOUND, description=Response body is null or Empty, timestamp=2024-04-02T11:27:21.745303692, status=404, path=https://www.bbb.com/idvh/rest/Products/2171815?limit=50&offset=0&IsOnExchange=false&includeRates=true&channelWorkflow=Agency&isSamsClubPlan=false, traceId=null, causes=null)])","stack_trace":"com.bbb.eai.chassis.exceptions.BaseAPIException: null\n\tat com.bbb.eai.chassis.config.BaseWebClient.buildExceptionObject(BaseWebClient.java:279)\n\tSuppressed: reactor.core.publisher.FluxOnAssembly$OnAssemblyException: \nError has been observed at the following site(s):\n\t*__checkpoint ⇢ 404 NOT_FOUND from GET https://www.bbb.com/idvh/rest/Products/2171815 [DefaultWebClient]\nOriginal Stack Trace:\n\t\tat com.bbb.eai.chassis.config.BaseWebClient.buildExceptionObject(BaseWebClient.java:279)\n\t\tat com.bbb.eai.chassis.config.BaseWebClient.buildExceptionStateModel(BaseWebClient.java:206)\n\t\tat com.bbb.eai.chassis.config.BaseWebClient.lambda$handleExceptions$1(BaseWebClient.java:199)\n\t\tat reactor.core.publisher.MonoFlatMap$FlatMapMain.onNext(MonoFlatMap.java:132)\n\t\tat reactor.core.publisher.Operators$BaseFluxToMonoOperator.completePossiblyEmpty(Operators.java:2071)\n\t\tat reactor.core.publisher.MonoHasElement$HasElementSubscriber.onComplete(MonoHasElement.java:96)\n\t\tat reactor.core.publisher.MonoNext$NextSubscriber.onComplete(MonoNext.java:102)\n\t\tat reactor.core.publisher.FluxMap$MapSubscriber.onComplete(FluxMap.java:144)\n\t\tat reactor.core.publisher.FluxMap$MapSubscriber.onComplete(FluxMap.java:144)\n\t\tat reactor.core.publisher.FluxPeek$PeekSubscriber.onComplete(FluxPeek.java:260)\n\t\tat reactor.core.publisher.FluxMap$MapSubscriber.onComplete(FluxMap.java:144)\n\t\tat reactor.netty.channel.FluxReceive.onInboundComplete(FluxReceive.java:413)\n\t\tat reactor.netty.channel.ChannelOperations.onInboundComplete(ChannelOperations.java:431)\n\t\tat reactor.netty.channel.ChannelOperations.terminate(ChannelOperations.java:485)\n\t\tat reactor.netty.http.client.HttpClientOperations.onInboundNext(HttpClientOperations.java:712)\n\t\tat reactor.netty.channel.ChannelOperationsHandler.channelRead(ChannelOperationsHandler.java:114)\n\t\tat io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:444)\n\t\tat io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420)\n\t\tat io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412)\n\t\tat 
io.netty.handler.timeout.IdleStateHandler.channelRead(IdleStateHandler.java:286)\n\t\tat io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442)\n\t\tat io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420)\n\t\tat io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412)\n\t\tat io.netty.channel.CombinedChannelDuplexHandler$DelegatingChannelHandlerContext.fireChannelRead(CombinedChannelDuplexHandler.java:436)\n\t\tat io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:346)\n\t\tat io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:318)\n\t\tat io.netty.channel.CombinedChannelDuplexHandler.channelRead(CombinedChannelDuplexHandler.java:251)\n\t\tat io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442)\n\t\tat io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420)\n\t\tat io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412)\n\t\tat io.netty.handler.ssl.SslHandler.unwrap(SslHandler.java:1466)\n\t\tat io.netty.handler.ssl.SslHandler.decodeJdkCompatible(SslHandler.java:1329)\n\t\tat io.netty.handler.ssl.SslHandler.decode(SslHandler.java:1378)\n\t\tat io.netty.handler.codec.ByteToMessageDecoder.decodeRemovalReentryProtection(ByteToMessageDecoder.java:529)\n\t\tat io.netty.handler.codec.ByteToMessageDecoder.callDecode(ByteToMessageDecoder.java:468)\n\t\tat io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:290)\n\t\tat io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:444)\n\t\tat io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420)\n\t\tat io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412)\n\t\tat io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1410)\n\t\tat io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:440)\n\t\tat io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420)\n\t\tat io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:919)\n\t\tat io.netty.channel.epoll.AbstractEpollStreamChannel$EpollStreamUnsafe.epollInReady(AbstractEpollStreamChannel.java:800)\n\t\tat io.netty.channel.epoll.EpollEventLoop.processReady(EpollEventLoop.java:509)\n\t\tat io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:407)\n\t\tat io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997)\n\t\tat io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74)\n\t\tat io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30)\n\t\tat java.base/java.lang.Thread.run(Thread.java:840)\n"}
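One way to build that table (a sketch; the index/sourcetype filter is a placeholder, the message field is assumed to be already extracted from the JSON, and the regex assumes the literal text ", and Exception Occurred: " never appears inside the URI itself):

index=your_index sourcetype=your_sourcetype "Error_Request_Response"
| rex field=message "Error_Request_Response for URI: (?<uri>.+?), and Exception Occurred: (?<exception>.+)"
| table traceId, uri, exception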
Hi @abroun, to help you I also need the second search. In a few words, you have to correlate results from both searches using stats BY a common key. Ciao. Giuseppe
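A generic sketch of that pattern (the two index filters and the field names are placeholders):

(index=index_a some-filter) OR (index=index_b other-filter)
| stats values(status) AS status values(type) AS type BY id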
Hey, I have a problem preparing a Splunk query. Could you assist me? I have a simple query that returns a table with a few fields:

some-search
| fields id, time
| table id, time

I also have a macro with two arguments (id and time) that returns a table with status and type fields. I want to modify the first query somehow to run a subquery for each row by calling my macro and appending the fields to the final table. Finally, I want to have a table with four fields: id, time, status, and type (where status and type were obtained by calling a subquery with id and time). Is it possible?
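This is usually done with the map command, which runs a secondary search once per input row, substituting $field$ tokens with that row's values (a sketch; the inner search is an illustrative stand-in for the macro body, and maxsearches caps how many rows are processed):

some-search
| fields id, time
| map maxsearches=1000 search="search index=some_index $id$ | where _time < $time$ | sort - _time | head 1 | eval id=\"$id$\", time=\"$time$\" | table id, time, status, type"

Note that map discards the outer rows and returns only the inner results, which is why id and time are re-attached with eval inside the subsearch. It launches one search per row, so it gets slow quickly; a stats-based correlation on a common key is preferable when the data allows it.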
Hello,

I'm using Splunk Enterprise 9.1.2 on my local Linux machine (in a Docker container). When documenting a new custom command in the searchbnf.conf file in my Splunk app, one of the fields is called "description". However, I can't seem to find where this field is actually used. The command help that pops up when I start typing the command in the search assistant in Splunk Web only shows the "shortdesc" field. This is also true for commands from apps installed from Splunkbase. I'm attaching a screenshot that shows what I see when I start typing a command. In this case I'm using the "summary" command from the Splunk ML app. The entry in searchbnf.conf is:

[summary-command]
syntax = summary <model_name>
shortdesc = Returns a summary of a machine learning model that was learned using the "fit" command.
description = The summary is algorithm specific. For <LinearRegression>, \
    the summary is a list of coefficients. For \
    <LogisticRegression>, the summary is a list of \
    coefficients for each class.
comment1 = Inspect a previously-learned LinearRegression model, "errors_over_time":
example1 = | summary errors_over_time
usage = public
related = fit apply listmodels deletemodel

As can be seen, only "shortdesc" shows in the little help window that pops up. Is this normal? Is there something to change in some configuration to make the full description show? Is there a different place I can find the full description for the command? FYI, the "Learn more" link goes to http://localhost:8000/en-US/help?location=search_app.assist.summary which gets automatically redirected to https://docs.splunk.com/Documentation/MLApp/5.4.1/User/Customsearchcommands?ref=hk#summary. However, for my custom command it gets redirected to https://docs.splunk.com/Documentation/Splunk/9.1.2/SearchReference/3rdpartycustomcommands?ref=hk so not relevant... Thanks!
Any further input after answering your questions?
Hello Splunk Community, I'm seeking assistance with ingesting data from Tenable to my Splunk AIO instance. Every time I attempt to configure the TA for Tenable, I encounter HTTPS errors. I suspect there may be an error in the steps I'm following, and I would greatly appreciate someone guiding me through the process step by step. Additionally, I'm looking for guidance on how to ensure that the dashboards are running properly on the Tenable app within Splunk.
Hi guys, I am using the timeline visualization in my Splunk dashboard to show total elapsed time, but sometimes it is not a good fit. Are there any examples of showing total elapsed time with a different visualization? Is there a best way to show total elapsed time in a graph in a Splunk dashboard?

In my Splunk query:

| eventstats min(timestamp) AS Logon_Time, max(timestamp) AS Logoff_Time by correlationId
| eval StartTime=round(strptime(Logon_Time, "%Y-%m-%dT%H:%M:%S.%QZ"))
| eval EndTime=round(strptime(Logoff_Time, "%Y-%m-%dT%H:%M:%S.%QZ"))
| eval ElapsedTimeInSecs=EndTime-StartTime
| eval "Total Elapsed Time"=strftime(ElapsedTimeInSecs,"%H:%M:%S")

In my dashboard:

| sort -Timestamp
| eval ElapsedTimeInSecs=ElapsedTimeInSecs*1000
| table Timestamp correlationId Status ElapsedTimeInSecs
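One alternative sketch (it reuses the correlationId and timestamp fields from the query above; everything else is an assumption): compute the duration with stats, keep the numeric seconds for charting, and format a human-readable copy with tostring(..., "duration"), which, unlike strftime on a raw number, is safe for durations longer than 24 hours:

| eval t=strptime(timestamp, "%Y-%m-%dT%H:%M:%S.%QZ")
| stats min(t) AS StartTime max(t) AS EndTime BY correlationId
| eval ElapsedTimeInSecs=EndTime-StartTime
| eval "Total Elapsed Time"=tostring(ElapsedTimeInSecs, "duration")
| table correlationId, ElapsedTimeInSecs, "Total Elapsed Time"

The numeric ElapsedTimeInSecs column can then drive a bar or column chart, with the formatted string reserved for tables and tooltips.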
Hi @RanjithaN99, an Indexer Cluster is managed by the Cluster Manager, so you have to create the new indexes.conf on that server, which then deploys it to the indexers; for more information see https://docs.splunk.com/Documentation/Splunk/9.2.0/Indexer/Aboutclusters In a few words, you have to add a stanza for the new index to indexes.conf using the CLI and push the configuration from the GUI. For data forwarding from the SH to the Indexers, you should already have this configuration, because it's a best practice to send internal logs from all the Splunk servers to the Indexers. If not, go to [Settings > Forwarding and Receiving > Forwarding] and configure forwarding. Ciao. Giuseppe
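A sketch of both pieces (the index name, app path, and host:port values are illustrative):

# On the Cluster Manager, e.g. $SPLUNK_HOME/etc/manager-apps/_cluster/local/indexes.conf
[my_new_index]
homePath   = $SPLUNK_DB/my_new_index/db
coldPath   = $SPLUNK_DB/my_new_index/colddb
thawedPath = $SPLUNK_DB/my_new_index/thaweddb
repFactor  = auto

# On the search head, outputs.conf (the GUI Forwarding setup writes the equivalent)
[tcpout]
defaultGroup = primary_indexers

[tcpout:primary_indexers]
server = idx1.example.com:9997,idx2.example.com:9997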
4/2/24 5:57:10.000 AM   02-APR-2024 05:57:10 * (CONNECT_DATA=(SID=cpdb11)(CID=(PROGRAM=perl)(HOST=a5071ue1plora04)(USER=oracle))) * (ADDRESS=(PROTOCOL=tcp)(HOST=172.18.76.29)(PORT=53100)) * establish * cpdb11 * 0
4/2/24 5:57:10.000 AM   2024-04-02T05:57:10.270270-04:00
4/2/24 5:57:09.000 AM   02-APR-2024 05:57:09 * service_update * cpdb11 * 0
4/2/24 5:57:09.000 AM   02-APR-2024 05:57:09 * service_update * cpdb11 * 0
4/2/24 5:57:08.000 AM   TNS-12505: TNS:listener does not currently know of SID given in connect descriptor
4/2/24 5:57:08.000 AM   02-APR-2024 05:57:08 * (CONNECT_DATA=(SID=pdb09)(CID=(PROGRAM=perl)(HOST=a5071ue1plora04)(USER=oracle))) * (ADDRESS=(PROTOCOL=tcp)(HOST=172.18.76.29)(PORT=53098)) * establish * pdb09 * 12505
4/2/24 5:57:08.000 AM   TNS-12505: TNS:listener does not currently know of SID given in connect descriptor
4/2/24 5:57:08.000 AM   02-APR-2024 05:57:08 * (CONNECT_DATA=(SID=pdb09)(CID=(PROGRAM=perl)(HOST=a5071ue1plora04)(USER=oracle))) * (ADDRESS=(PROTOCOL=tcp)(HOST=172.18.76.29)(PORT=53096)) * establish * pdb09 * 12505
4/2/24 5:57:08.000 AM   2024-04-02T05:57:08.619205-04:00
index="mulesoft" applicationName="api"
| spath content.payload{}
| mvexpand content.payload{}
| transaction correlationId
| rename "content.payload{}.AP Import flow processing results{}.requestID" as RequestID "content.payload{}.GL Import flow processing results{}.impConReqId" as ImpConReqId content.payload{} as response
| eval OracleRequestId="RequestID: ".if(isnull(RequestID),0,RequestID)." ImpConReqId: ".if(isnull(ImpConReqId),0,ImpConReqId)
| table OracleRequestId response
Please share your sample events in a code block (</>) rather than an image of them. Also, what settings do you currently have? I am assuming you are looking to do this at ingest time rather than search time; please clarify.
Hi, I am working in a distributed Splunk environment with one search head and an indexer cluster. I am trying to monitor a path that is on the search head, and I created a monitor input from the web GUI. How do I create an index on the indexer cluster and configure forwarding from the search head to the indexer cluster? Please help me. Thanks
What is the full search?
index="mulesoft" applicationName="api"
| spath content.payload{}
| mvexpand content.payload{}
| transaction correlationId
| rename "content.payload{}.AP Import flow processing results{}.requestID" as RequestID "content.payload{}.GL Import flow processing results{}.impConReqId" as ImpConReqId content.payload{} as response
| eval OracleRequestId="RequestID: ".if(isnull(RequestID),0,RequestID)." ImpConReqId: ".if(isnull(ImpConReqId),0,ImpConReqId)
| table OracleRequestId response