All Posts

I've found the solution. The problem was mine: if I put "testcsv.csv" it doesn't work, but if I remove the ".csv" it works perfectly. Thanks for your reply.
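For anyone hitting the same thing: the lookup command expects the name of the lookup definition (or lookup table), not the CSV file name. A minimal sketch of the working form, assuming a lookup definition named testcsv backed by testcsv.csv (the field names url, type, and count are the ones from this thread):
| lookup testcsv url OUTPUT type, count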
@siemless This may be best to discuss in the Slack Users group, but I could not find you there, so I will respond here. In some cases, I have boxes where the /opt/splunk dir is mounted to a separate drive with mount point /opt/splunk. In that case you can just swap the disk; I learned this method from AWS support, but it takes preplanning. In other cases, I have boxes that are jacked up, either with volume issues across multiple disks or just not set up to swap disks. In that case you can use the Splunk docs: https://docs.splunk.com/Documentation/Splunk/9.2.1/Installation/MigrateaSplunkinstance I argued with Splunk about the documentation steps, but they claim the steps are correct, although I still believe they are confusing. FWIW, this is what I did:
1. Create a new host with the new OS (in my case I rename/re-IP to the original afterward).
2. Install the same version of Splunk on the new host (I used a .tar), set systemd, set the same admin password, then stop splunkd; maybe test a restart and reboot to verify.
3. Stop splunkd on the old host, tar up /opt/splunk, copy the old tar to the new box, untar it over the new install, then start splunkd.
That worked for me, and going forward all new hosts will be configured for the disk-swappable process. Good luck
Hello, Unfortunately, I've used your exact method and it doesn't work. I do get my line indicating my "url", but nothing in "type" nor in its "count". Maybe I made a mistake by indicating the wrong "destination app" when creating the "lookup definition"? What should I put? Thanks, Regards
Hello Everyone, I'm trying to calculate the "time_difference" between one column and another in Splunk. The problem is that the value I subtract from is the current time, and when I use the current-time value it is shown in the table as a single event (epoch_current_time). Therefore, when I subtract the "epoch_password_last_modified" value from "epoch_current_time", I get no results. Is there a way to make "epoch_current_time" visible in each row, like the "epoch_password_last_modified" value?
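Would something like computing the current time with eval on every row work? A sketch of what I mean, assuming epoch_password_last_modified is already in epoch seconds:
| eval epoch_current_time=now()
| eval time_difference=epoch_current_time - epoch_password_last_modified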
Try something like this
| eval check=replace($token$," +","\s+")
| where match(ID, check)
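A quick way to sanity-check the pattern with made-up values (a sketch; the dashboard token is simulated here with a literal string):
| makeresults
| eval ID="ABC  123"
| eval check=replace("ABC 123"," +","\s+")
| where match(ID, check)
The row survives because the single space in the literal is rewritten to \s+, which matches the double space in ID.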
Try tostring duration
| eventstats min(timestamp) AS Logon_Time, max(timestamp) AS Logoff_Time by correlationId
| eval StartTime=round(strptime(Logon_Time, "%Y-%m-%dT%H:%M:%S.%QZ"))
| eval EndTime=round(strptime(Logoff_Time, "%Y-%m-%dT%H:%M:%S.%QZ"))
| eval ElapsedTimeInSecs=EndTime-StartTime
| eval "Total Elapsed Time"=tostring(ElapsedTimeInSecs,"duration")
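To see the duration rendering on its own, here is a standalone sketch with a hard-coded value:
| makeresults
| eval ElapsedTimeInSecs=3725
| eval "Total Elapsed Time"=tostring(ElapsedTimeInSecs,"duration")
This renders 3725 seconds as 01:02:05.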
If you didn't want multiple lines, this is how I would accomplish the same thing.
| sort -elapseJobTime
| stats list(eval(mvindex(JOBNAME,0,2))) as JOBNAME list(eval(mvindex(JOBID,0,2))) as JOBID list(eval(mvindex(elapseJobTime,0,2))) as elapseJobTime by desc
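For reference, mvindex(field, 0, 2) keeps the first three values of a multivalue field; a quick standalone check (sketch with made-up values):
| makeresults
| eval JOBNAME=split("jobA,jobB,jobC,jobD", ",")
| eval top3=mvindex(JOBNAME, 0, 2)
Here top3 ends up holding jobA, jobB, and jobC.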
Response received from support: This issue is currently being raised with our internal teams to look into changing the behavior of these checks in a future release of the app, to prevent this error from appearing in the logs. Looking at the options available in Splunk for the application, there does not seem to be a way to limit the polling frequency, which in turn would reduce the number of errors appearing.
Try filtering the results on the date_hour field.
index=* host=*
| where date_hour>=18 AND date_hour<21
| eval pctCPU=if(CPU="all",100-pctIdle,Value)
| timechart avg(pctCPU) AS avgCPU BY host
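If date_hour is not extracted for this data, an equivalent filter can be derived from _time (a sketch; note that date_hour comes from the raw event timestamp while strftime uses the search-time timezone, so the two can differ across timezones):
index=* host=*
| eval hour=tonumber(strftime(_time,"%H"))
| where hour>=18 AND hour<21
| eval pctCPU=if(CPU="all",100-pctIdle,Value)
| timechart avg(pctCPU) AS avgCPU BY host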
Assuming traceId and message have already been extracted from the JSON, try something like this
| rex field=message "Error_Request_Response for URI: (?<uri>[^,]+), and Exception Occurred: (?<exception>[^,]+),"
| table traceId uri exception
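To verify the extraction in isolation, here is a sketch against a shortened copy of the sample message:
| makeresults
| eval message="Error_Request_Response for URI: https://www.bbb.com/idvh/rest/Products/2171815?limit=50, and Exception Occurred: ExceptionStateModel(code=404 NOT_FOUND,"
| rex field=message "Error_Request_Response for URI: (?<uri>[^,]+), and Exception Occurred: (?<exception>[^,]+),"
| table uri exception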
I have a dataset of user data including the user's LastLogin. The LastLogin field is slightly oddly formatted but very regular in its pattern. I wish to calculate the number of days since LastLogin. This should be super simple. What is bizarre is that in a contrived example using makeresults it works perfectly.
| makeresults
| eval LastLogin="Mar 20, 2024, 16:40"
| eval lastactive=strptime(LastLogin, "%b %d, %Y, %H:%M")
| eval dayslastactive=round((now() - lastactive) / 86000, 0)
This yields the expected result (screenshot). But with the actual results the same transformations do not work.
| inputlookup MSOLUsers
| where match(onPremisesDistinguishedName, "OU=Users")
| where not isnull(LastLogin)
| eval LastActive=strptime(LastLogin, "%b %d, %Y, %H:%M")
| eval DaysLastActive=round((now() - LastActive) / 86000, 0)
| fields Company, Department, DisplayName, LastLogin, LastActive, DaysLastActive
This yields blank LastActive and DaysLastActive values (screenshot). What am I missing? Cutting and pasting the strings into the makeresults form gives what I would expect.
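Could it be invisible characters in the lookup values (non-breaking spaces or trailing whitespace)? A diagnostic sketch I could run, assuming Splunk's replace accepts the PCRE \x{..} escape:
| inputlookup MSOLUsers
| eval rawlen=len(LastLogin)
| eval LastLoginClean=trim(replace(LastLogin, "\x{00A0}", " "))
| eval LastActive=strptime(LastLoginClean, "%b %d, %Y, %H:%M")
| table LastLogin, rawlen, LastActive
If rawlen is longer than the visible string, or LastActive starts populating, the field contains characters the strptime format string never matches.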
Hello, I am looking to restrict my search results to only 6pm to 9pm over the last 90 days. How can I achieve this with the below query?
index=* host=*
| eval pctCPU=if(CPU="all",100-pctIdle,Value)
| timechart avg(pctCPU) AS avgCPU BY host
The fields are probably multivalue fields for some of your transactions, which is why the eval is not working. You should probably start again with your events and work out how to break them up into separate parts so that you can create the composite id. In other words, to get where you want to be, don't start from where you are; you need to go back a few steps.
Thanks will do. 
The second search (macro) is like:
some-search-index $id$
| eval epoch = _time
| where epoch < $timestamp$
| sort BY _time
| head 1
| fields id, status, type
| table id, status, type
That error usually is seen when an indexer stops while processing a search. What relevant messages do you see in search.log? Are there any core files from the indexers? The local SSD provides only 8 days of storage. Any query that searches data more than 8 days old will have to fetch the data from SmartStore, which will slow processing of the search. Splunk recommends the local cache be large enough to hold at least 30 days of data.
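If growing the local cache is an option, the cache manager's size ceiling is set on the indexers in server.conf; a sketch (the number below is an illustrative placeholder, not a recommendation):
[cachemanager]
# cap on the SmartStore local cache, in MB (placeholder value)
max_cache_size = 2000000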
Below I provided a sample trace where we have a message with the below format:
Error_Request_Response for URI: {}, and Exception Occurred: {}
I want to create a table with the fields uri and exception, like:
traceId    uri    exception
aaa        bbb    ccc
Can someone guide me on how to do it?
{"@timestamp":"2024-04-02T11:27:21.745Z","uri":"","level":"ERROR","service":"product-aggregator-models","traceId":"660beb9951d7a6d29b2480e4450d8d82","spanId":"fd29a51914f47434","parentSpanId":"","pid":"1","thread":"reactor-http-epoll-2","class":"c.h.e.p.client.v20.ProductServiceClient","message":"Error_Request_Response for URI: https://www.bbb.com/idvh/rest/Products/2171815?limit=50&offset=0&IsOnExchange=false&includeRates=true&channelWorkflow=Agency&isSamsClubPlan=false, and Exception Occurred: ExceptionStateModel(code=404 NOT_FOUND, description=Requested resource not found, timestamp=2024-04-02T11:27:21.745328361, status=404, path=null, traceId=null, causes=[ExceptionStateModel(code=404 NOT_FOUND, description=Response body is null or Empty, timestamp=2024-04-02T11:27:21.745303692, status=404, path=https://www.bbb.com/idvh/rest/Products/2171815?limit=50&offset=0&IsOnExchange=false&includeRates=true&channelWorkflow=Agency&isSamsClubPlan=false, traceId=null, causes=null)])","stack_trace":"com.bbb.eai.chassis.exceptions.BaseAPIException: null\n\tat com.bbb.eai.chassis.config.BaseWebClient.buildExceptionObject(BaseWebClient.java:279)\n\tSuppressed: reactor.core.publisher.FluxOnAssembly$OnAssemblyException: \nError has been observed at the following site(s):\n\t*__checkpoint ⇢ 404 NOT_FOUND from GET https://www.bbb.com/idvh/rest/Products/2171815 [DefaultWebClient]\nOriginal Stack Trace:\n\t\tat com.bbb.eai.chassis.config.BaseWebClient.buildExceptionObject(BaseWebClient.java:279)\n\t\tat com.bbb.eai.chassis.config.BaseWebClient.buildExceptionStateModel(BaseWebClient.java:206)\n\t\tat com.bbb.eai.chassis.config.BaseWebClient.lambda$handleExceptions$1(BaseWebClient.java:199)\n\t\tat reactor.core.publisher.MonoFlatMap$FlatMapMain.onNext(MonoFlatMap.java:132)\n\t\tat reactor.core.publisher.Operators$BaseFluxToMonoOperator.completePossiblyEmpty(Operators.java:2071)\n\t\tat reactor.core.publisher.MonoHasElement$HasElementSubscriber.onComplete(MonoHasElement.java:96)\n\t\tat reactor.core.publisher.MonoNext$NextSubscriber.onComplete(MonoNext.java:102)\n\t\tat reactor.core.publisher.FluxMap$MapSubscriber.onComplete(FluxMap.java:144)\n\t\tat reactor.core.publisher.FluxMap$MapSubscriber.onComplete(FluxMap.java:144)\n\t\tat reactor.core.publisher.FluxPeek$PeekSubscriber.onComplete(FluxPeek.java:260)\n\t\tat reactor.core.publisher.FluxMap$MapSubscriber.onComplete(FluxMap.java:144)\n\t\tat reactor.netty.channel.FluxReceive.onInboundComplete(FluxReceive.java:413)\n\t\tat reactor.netty.channel.ChannelOperations.onInboundComplete(ChannelOperations.java:431)\n\t\tat reactor.netty.channel.ChannelOperations.terminate(ChannelOperations.java:485)\n\t\tat reactor.netty.http.client.HttpClientOperations.onInboundNext(HttpClientOperations.java:712)\n\t\tat reactor.netty.channel.ChannelOperationsHandler.channelRead(ChannelOperationsHandler.java:114)\n\t\tat io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:444)\n\t\tat io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420)\n\t\tat io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412)\n\t\tat 
io.netty.handler.timeout.IdleStateHandler.channelRead(IdleStateHandler.java:286)\n\t\tat io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442)\n\t\tat io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420)\n\t\tat io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412)\n\t\tat io.netty.channel.CombinedChannelDuplexHandler$DelegatingChannelHandlerContext.fireChannelRead(CombinedChannelDuplexHandler.java:436)\n\t\tat io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:346)\n\t\tat io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:318)\n\t\tat io.netty.channel.CombinedChannelDuplexHandler.channelRead(CombinedChannelDuplexHandler.java:251)\n\t\tat io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442)\n\t\tat io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420)\n\t\tat io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412)\n\t\tat io.netty.handler.ssl.SslHandler.unwrap(SslHandler.java:1466)\n\t\tat io.netty.handler.ssl.SslHandler.decodeJdkCompatible(SslHandler.java:1329)\n\t\tat io.netty.handler.ssl.SslHandler.decode(SslHandler.java:1378)\n\t\tat io.netty.handler.codec.ByteToMessageDecoder.decodeRemovalReentryProtection(ByteToMessageDecoder.java:529)\n\t\tat io.netty.handler.codec.ByteToMessageDecoder.callDecode(ByteToMessageDecoder.java:468)\n\t\tat io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:290)\n\t\tat io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:444)\n\t\tat io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420)\n\t\tat io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412)\n\t\tat io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1410)\n\t\tat io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:440)\n\t\tat io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420)\n\t\tat io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:919)\n\t\tat io.netty.channel.epoll.AbstractEpollStreamChannel$EpollStreamUnsafe.epollInReady(AbstractEpollStreamChannel.java:800)\n\t\tat io.netty.channel.epoll.EpollEventLoop.processReady(EpollEventLoop.java:509)\n\t\tat io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:407)\n\t\tat io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997)\n\t\tat io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74)\n\t\tat io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30)\n\t\tat java.base/java.lang.Thread.run(Thread.java:840)\n"}
Hi @abroun, to help you I also need the second search. In a few words, you have to correlate results from both searches using stats BY a common key. Ciao. Giuseppe
Hey, I have a problem preparing a Splunk query. Could you assist me? I have a simple query that returns a table with a few fields:
some-search
| fields id, time
| table id, time
I also have a macro with two arguments (id and time) that returns a table with status and type fields. I want to modify the first query somehow to run a subquery for each row by calling my macro and appending the fields to the final table. Finally, I want to have a table with four fields: id, time, status, and type (where status and type were obtained by calling a subquery with id and time). Is it possible?
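I suspect something like the map command might do it, substituting each row's fields into a per-row search; a sketch using the macro body I posted above, where $id$ and $time$ are map's per-row tokens, though I am unsure about performance:
some-search
| fields id, time
| map maxsearches=100 search="search some-search-index $id$ | eval epoch = _time | where epoch < $time$ | sort BY _time | head 1 | fields id, status, type"
One catch with this sketch: any outer field that should survive, like time, has to be re-emitted inside the mapped search.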
Hello,

I'm using Splunk Enterprise 9.1.2 on my local Linux machine (in a Docker container). When documenting a new custom command in the searchbnf.conf file in my Splunk app, one of the fields is called "description". However, I can't seem to find where this field is actually used. The command help that pops up when I start typing the command in the search assistant in Splunk Web only shows the "shortdesc" field. This is also true for commands from apps installed from SplunkBase. I'm attaching a screenshot that shows what I see when I start typing a command. In this case I'm using the "summary" command from the Splunk ML app. The entry in searchbnf.conf is:
[summary-command]
syntax = summary <model_name>
shortdesc = Returns a summary of a machine learning model that was learned using the "fit" command.
description = The summary is algorithm specific. For <LinearRegression>, \
  the summary is a list of coefficients. For \
  <LogisticRegression>, the summary is a list of \
  coefficients for each class.
comment1 = Inspect a previously-learned LinearRegression model, "errors_over_time":
example1 = | summary errors_over_time
usage = public
related = fit apply listmodels deletemodel
As can be seen, only "shortdesc" shows in the little help window that pops up. Is this normal? Is there something to change in some configuration to make the full description show? Is there a different place I can find the full description for the command? FYI, the "Learn more" link goes to http://localhost:8000/en-US/help?location=search_app.assist.summary which gets automatically redirected to https://docs.splunk.com/Documentation/MLApp/5.4.1/User/Customsearchcommands?ref=hk#summary. However, for my custom command it gets redirected to https://docs.splunk.com/Documentation/Splunk/9.1.2/SearchReference/3rdpartycustomcommands?ref=hk so it is not relevant... Thanks!