All Topics



Hi, my task involves creating a search against the Network_Traffic data model. Below is the base search and my attempt at converting it to a data model search:

| tstats summariesonly=t values(All_Traffic.src_ip) as src_ip, dc(All_Traffic.dest_port) as num_dest_port, values(All_Traffic.dest_port) as dest_port from datamodel=Network_Traffic by All_Traffic.dest_ip
| where num_dest_port > 100
| search NOT [| inputlookup addresses.csv | search (comments =*scanner*) | fields IP AS ALL_Traffic.src_ip | format ]

The part highlighted in red is not working as expected! Thanks.
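One possible fix, offered as a hedged sketch rather than a confirmed answer: "fields IP AS ALL_Traffic.src_ip" does not rename a field (fields only keeps or removes fields), and after the tstats stage the source address is available under the renamed field src_ip. Assuming addresses.csv has an IP column and the intent is to exclude scanner sources, something along these lines may behave better:

| tstats summariesonly=t values(All_Traffic.src_ip) as src_ip, dc(All_Traffic.dest_port) as num_dest_port, values(All_Traffic.dest_port) as dest_port from datamodel=Network_Traffic by All_Traffic.dest_ip
| where num_dest_port > 100
| search NOT [| inputlookup addresses.csv | search comments="*scanner*" | rename IP as src_ip | fields src_ip | format ]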
Hi, I have four indexes with call data. Each index is populated with the data of the corresponding SIP operator, i.e. XML in one index, key-value in the second, CSV in the third, and JSON in the last. I need to get statistics on these calls: who called, how many times, and what the total duration of those conversations is. That is, as in the attached picture. The question is how to "glue" these statistics together. The main difficulty is that before getting normalized statistics (or a table), I have many transformations to apply for each index. Please help me figure out the best way to do this.
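One common pattern, sketched here with invented index, sourcetype, and field names purely for illustration: normalize each feed to a shared set of fields (caller, duration) in its own pipeline, then aggregate once at the end. Pushing the per-sourcetype transformations into field aliases or calculated fields (or a data model) would achieve the same thing more cleanly, but the search-time version looks roughly like this:

index=sip_xml sourcetype=calls_xml
| rename From as caller, Duration as duration
| append [search index=sip_kv sourcetype=calls_kv | rename src_caller as caller, call_time as duration]
| append [search index=sip_csv sourcetype=calls_csv | rename calling_party as caller, dur as duration]
| append [search index=sip_json sourcetype=calls_json | spath | rename caller_id as caller, call_duration as duration]
| stats count as calls sum(duration) as total_duration by caller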
Hello! When I updated my Splunk Universal Forwarder, it stopped sending data into Splunk. I do not know how to find the upgraded Splunk server's tcpout address that I need to update in the forwarder's configuration files (i.e. edit the configuration files under $SPLUNK_HOME/etc/system/local/ to use the new output server address). Is there a way to find the new tcpout server address (the address I need to change in my configuration file after the Splunk update) from the Splunk web application settings? What I need to find (highlighted in red): server = 1xx.123.12.212:Port (IPAddress.numberToUpdate:Port). ***Does the 212 represent the latest Splunk software version (should I change it to the updated version of Splunk)? Thank you.
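A couple of ways to inspect what the forwarder is currently configured to send to, sketched on the assumption that the forwarder CLI is available (the install path is illustrative - use the forwarder's own $SPLUNK_HOME):

cd /opt/splunkforwarder/bin                    # adjust to the forwarder's install path
./splunk btool outputs list tcpout --debug     # shows every outputs.conf setting and which file it comes from
./splunk list forward-server                   # shows active and configured receiving indexers as host:port

For what it's worth, in a server = <address>:<port> setting the part before the colon is the receiving indexer's address and the part after is its listening port; neither number encodes the Splunk software version.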
I am wondering if it makes sense to send logs from a network device to 2 separate machines that have universal forwarders.  I'm new to my company and am trying to understand why this was implemented and what would be the best practice.  I would assume this was done for redundancy purposes.  Please advise as to what the best practice should be.
Could someone confirm the expected outcome for the following settings?

outputs.conf

[tcpout:group1]
server = 192.168.10.1, 192.168.10.2
sendCookedData = true
blockOnCloning = true # Default block on cloning setting

[tcpout:group2]
server = 192.168.20.1, 192.168.20.2
sendCookedData = true
blockOnCloning = false # Alternative blockOnCloning setting

inputs.conf

[monitor:////var/log/syslog]
index = linux
sourcetype = syslog
_TCP_ROUTING = group1,group2

1) If group1 is unavailable, but group2 is available
2) If group1 is available, but group2 is unavailable
3) If group1 and group2 are both unavailable

I have read the description in the spec file but am not sure how to interpret it correctly:

blockOnCloning = <boolean>
* Whether or not the TcpOutputProcessor should wait until at least one of the cloned output groups receives events before attempting to send more events.
* If set to "true", the TcpOutputProcessor blocks until at least one of the cloned groups receives events. It does not drop events when all the cloned groups are down.
* If set to "false", the TcpOutputProcessor drops events when all the cloned groups are down and all queues for the cloned groups are full. When at least one of the cloned groups is up and queues are not full, the events are not dropped.
* Default: true

My end goal is to configure things such that even if one tcpout group is unavailable, logging continues to the other group unaffected, even if that means the unavailable group will miss receiving some events. Thanks!
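A sketch of the direction that seems to match that goal, offered with caveats to verify against the outputs.conf spec for your version: the spec lists blockOnCloning under the top-level [tcpout] stanza rather than per output group, and dropClonedEventsOnQueueFull (roughly, how many seconds to wait before dropping events destined for a full cloned group; -1 means wait indefinitely) is the companion setting that controls how quickly an unreachable group is sacrificed. Ports are shown as the conventional 9997 purely for illustration.

[tcpout]
blockOnCloning = false
dropClonedEventsOnQueueFull = 5

[tcpout:group1]
server = 192.168.10.1:9997,192.168.10.2:9997

[tcpout:group2]
server = 192.168.20.1:9997,192.168.20.2:9997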
I'm looking to create a line chart like the attached picture. The data points would be the time of day a file is received; there are 5 different files, so it would be a multi-line chart. My most recent attempt was based on someone else's example of this query. It does work to an extent, but the received time is converted to a decimal, which isn't ideal for my use case.

| eval t=split(strftime(_time, "%H:%M:%S"), ":")
| eval h=mvindex(t,0), m=mvindex(t,1), s=mvindex(t,2)
| eval v=(h)+(m/100)
| bin _time span=1d
| chart max(v) over _time by job
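One alternative sketch that keeps the value as a real quantity instead of an hour.decimal hybrid: chart the number of seconds since midnight, so the Y axis is linear and can be read (or relabelled) as a time of day. The job field name is taken from the query above; the rest is an assumption about the events.

| eval secs_since_midnight = _time - relative_time(_time, "@d")
| bin _time span=1d
| chart max(secs_since_midnight) over _time by job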
I am writing a shell script which I will execute with a cron job to clean eventdata from indexes daily. How do I script Splunk commands in a shell script so they execute? Here is what I have:

cd /datadrive/opt/splunk/bin
./splunk clean eventdata -index audit y
./splunk clean eventdata -index _internal y
./splunk clean eventdata -index _introspection y
./splunk clean eventdata -index _metrics y
./splunk clean eventdata -index _telemetry y
./splunk start

Any and all help would be greatly appreciated.
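A sketch of one way to wrap this, under two assumptions worth double-checking for your version: splunk clean eventdata refuses to run while splunkd is running (so the script stops Splunk first), and the -f flag answers the confirmation prompt so no interactive "y" is needed. The index names are copied from the script above.

#!/bin/bash
# Clean selected indexes daily from cron.
SPLUNK_BIN=/datadrive/opt/splunk/bin/splunk

# clean eventdata requires splunkd to be stopped
"$SPLUNK_BIN" stop

for idx in audit _internal _introspection _metrics _telemetry; do
    # -f skips the interactive confirmation prompt
    "$SPLUNK_BIN" clean eventdata -index "$idx" -f
done

"$SPLUNK_BIN" start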
Hi, we have been using Splunk for a long time, but we don't have a support account to get help from Splunk, e.g. to raise an issue with them. The old team who managed Splunk didn't have a support account either. Kindly provide me the steps to get a support account for Splunk.
I am trying to use Reactjs as a frontend for my app. Is it possible to use splunkjs/mvc/searchmanager in a function based React app using hooks?  Thank you in advance for the help!  @ivfisunov 
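A rough sketch of what this might look like inside a hook, assuming the SplunkJS stack (and its RequireJS loader) is already available on the page, e.g. on a page served by Splunk Web. The SearchManager options and results-model calls follow the classic splunkjs/mvc API; how cleanly this loads inside a bundled React app depends on your build setup, so treat it as a starting point rather than a verified recipe.

import { useEffect, useState } from "react";

// Hypothetical hook: runs an SPL search via splunkjs/mvc and returns result rows.
function useSplunkSearch(spl) {
  const [rows, setRows] = useState([]);

  useEffect(() => {
    // window.require is the RequireJS loader shipped with the SplunkJS stack
    window.require(["splunkjs/mvc/searchmanager"], (SearchManager) => {
      const manager = new SearchManager({
        id: "react-search-" + Date.now(),
        search: spl,
        preview: false,
        cache: false,
      });
      const results = manager.data("results", { count: 0 });
      results.on("data", () => setRows(results.data().rows));
    });
  }, [spl]);

  return rows;
}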
Hi, is there a way to find out all the values that UBA can understand for a certain field? E.g. under Cloud Storage, http://docs.splunk.com/Documentation/UBA/5.2.0/GetDataIn/CIMtoUBAfields#Cloud_Storage_category, the example column for change_type lists the following: Download, Preview, Delete, Create, Edit. Could there be others, e.g. Upload? There are other fields where the example set seems very limited. I believe we can add additional values under /etc/caspida/local/conf/normalize.rules, but how do we ensure that UBA actually understands those? Thanks,
Hi all, kindly help me modify this query to run against the Network_Traffic data model. I have built the query:

index=firewall sourcetype="traffic"
| stats values(dest_port) as dest_port, values(dest_ip) as dest_ip, dc(dest_ip) as num_dest_ip, dc(dest_port) as num_dest_port by src_ip
| where (num_dest_ip > 350 AND num_dest_port > 800)

Thanks
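A hedged sketch of the equivalent tstats search against the CIM Network_Traffic data model, assuming the firewall data is mapped to it (drop summariesonly=t if the data model is not accelerated):

| tstats summariesonly=t values(All_Traffic.dest_port) as dest_port values(All_Traffic.dest_ip) as dest_ip dc(All_Traffic.dest_ip) as num_dest_ip dc(All_Traffic.dest_port) as num_dest_port from datamodel=Network_Traffic by All_Traffic.src_ip
| rename All_Traffic.src_ip as src_ip
| where num_dest_ip > 350 AND num_dest_port > 800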
I am working on a splunk xml dashboard where the above chart shows about the events that were in wait state, error state, running state, and success state. But we can see that the events which were in error state, the same jobs ran again and went into success state but we have a requirement in such a way that if the same job ran again for the same day and the job was successful then the event has to be cleared from the dashboard and the new data has to be refreshed. Below is the one snippet which will show the events which were in waiting state and again those events went into successful as the same jobs has ran into success again. below is the sample raw text of the events with the status of the jobs which were in waiting state. {"timestamp": "2023-03-29T04:57:07.366881Z", "level": "INFO", "filename": "splunk_sample_csv.py", "funcName": "main", "lineno": 38, "message": "Dataframe row : {\"_c0\":{\"0\":\"{\",\"1\":\"    \\\"total\\\": 236\",\"2\":\"    \\\"statuses\\\": [\",\"3\":\"        {\",\"4\":\"            \\\"status\\\": \\\"Wait Condition\\\"\",\"5\":\"            \\\"Timestamp\\\": \\\"2023\\/03\\/22 17:26:40\\\"\",\"6\":\"            \\\"count\\\": 0\",\"7\":\"            \\\"name\\\": \\\"BHW_T8841_ANTRAG_RDV\\\"\",\"8\":\"            \\\"jobId\\\": \\\"LNDEV02:0dqvp\\\"\",\"9\":\"        }\",\"10\":\"        {\",\"11\":\"            \\\"status\\\": \\\"Wait Condition\\\"\",\"12\":\"            \\\"Timestamp\\\": \\\"2023\\/03\\/22 17:26:40\\\"\",\"13\":\"            \\\"count\\\": 0\",\"14\":\"            \\\"name\\\": \\\"BHW_T8009_DATEN_EBIS_RDV\\\"\",\"15\":\"            \\\"jobId\\\": \\\"LNDEV02:0dqvi\\\"\",\"16\":\"        }\",\"17\":\"        {\",\"18\":\"            \\\"status\\\": \\\"Wait Condition\\\"\",\"19\":\"            \\\"Timestamp\\\": \\\"2023\\/03\\/22 17:26:40\\\"\",\"20\":\"            \\\"count\\\": 0\",\"21\":\"            \\\"name\\\": \\\"BHW_T5895_AZV_DATEN_RDV\\\"\",\"22\":\"            \\\"jobId\\\": \\\"LNDEV02:0dqvd\\\"\",\"23\":\"        }\",\"24\":\"        {\",\"25\":\"            \\\"status\\\": \\\"Wait Condition\\\"\",\"26\":\"            \\\"Timestamp\\\": \\\"2023\\/03\\/22 17:26:40\\\"\",\"27\":\"            \\\"count\\\": 0\",\"28\":\"            \\\"name\\\": \\\"BHW_T5829_SONDERTILGUNG_RDV\\\"\",\"29\":\"            \\\"jobId\\\": \\\"LNDEV02:0dqv9\\\"\",\"30\":\"        }\",\"31\":\"        {\",\"32\":\"            \\\"status\\\": \\\"Wait Condition\\\"\",\"33\":\"            \\\"Timestamp\\\": \\\"2023\\/03\\/22 17:26:40\\\"\",\"34\":\"            \\\"count\\\": 0\",\"35\":\"            \\\"name\\\": \\\"BHW_T5152_PROLO_ZINSEN_RDV\\\"\",\"36\":\"            \\\"jobId\\\": \\\"LNDEV02:0dqv6\\\"\",\"37\":\"        }\",\"38\":\"        {\",\"39\":\"            \\\"status\\\": \\\"Wait Condition\\\"\",\"40\":\"            \\\"Timestamp\\\": \\\"2023\\/03\\/22 17:26:40\\\"\",\"41\":\"            \\\"count\\\": 0\",\"42\":\"            \\\"name\\\": \\\"BHW_T5149_PROLO_KOND_RDV\\\"\",\"43\":\"            \\\"jobId\\\": \\\"LNDEV02:0dqv1\\\"\",\"44\":\"        }\",\"45\":\"        {\",\"46\":\"            \\\"status\\\": \\\"Wait Condition\\\"\",\"47\":\"            \\\"Timestamp\\\": \\\"2023\\/03\\/22 17:26:40\\\"\",\"48\":\"            \\\"count\\\": 0\",\"49\":\"            \\\"name\\\": \\\"BHW_T5144_ZUT_SALDEN_RDV\\\"\",\"50\":\"            \\\"jobId\\\": \\\"LNDEV02:0dqux\\\"\",\"51\":\"        }\",\"52\":\"        {\",\"53\":\"            \\\" also we have extracted the job names and statuses of the events separately.  
The above jobs went into the success state after running again. Please help us with this.
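A sketch of the de-duplication step, with field names assumed from the post (name for the job name, status for its state, and "Success" as the literal success value - adjust all three to match the actual extractions): keep only the latest status per job per day, then drop jobs whose latest status is success, so a job that later succeeds disappears from the panel.

... base search and field extractions ...
| bin _time as day span=1d
| stats latest(status) as latest_status latest(_time) as last_seen by day name
| where latest_status != "Success"
| convert ctime(last_seen)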
If anyone can provide a meaningful query for me, so that I can search for any alerts in our Splunk that do not contain any results for contributing events. Thanks a lot.
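One possible starting point, sketched against Splunk's internal scheduler logs on the assumption that "alerts with no contributing events" means scheduled alerts whose runs returned zero results (if these are Enterprise Security notables, a different search against the notable index would be needed):

index=_internal sourcetype=scheduler status=success alert_actions=* result_count=0
| stats count as zero_result_runs latest(_time) as last_run by app savedsearch_name
| convert ctime(last_run)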
Dear experts, I'm looking for help with a Splunk query. I was working on a query to identify which HMCs each frame is connected to, and I'm able to find the HMCs a frame is connected to. If a frame is connected to 2 HMCs, the active_hmc field contains both HMC names separated by "_"; if the frame is connected to a single HMC, active_hmc contains only that one HMC name. I would like to create a new field that contains the actual HMC pair name for each frame. For the frames active on a single HMC, I would like to derive the pair by searching the entire table for a matching pair.

For example: if a frame has active_hmc=hmc50, and other frames in the same field have active_hmc=hmc49_hmc50, I would like to find that pair and create a new field hmc_pair in the table with the value hmc_pair=hmc49_hmc50. Could you help me with the query?

Splunk query:

index=aix_os source=hmc
| spath path=hmc_info{} output=LIST
| mvexpand LIST
| spath input=LIST
| where category == "power_frame"
| dedup hmc_name frame_name
| stats values(hmc_name) as hmc_names dc(hmc_name) as hmc_count by frame_serial, frame_name, datacenter
| eval active_hmc=mvjoin(mvsort(hmc_names), "_")
| eval hmc_pair=mvjoin(mvsort(hmc_names), "_")
| eval hmc_redundancy=if(hmc_count=2, if(match(active_hmc, "^([^_]+)_([^_]+)$") AND mvcount(mvdedup(hmc_names))=2, "OK", "missing"), "NOT-OK")
| table active_hmc frame_name, frame_serial, hmc_redundancy, datacenter
| sort +hmc_redundancy

Thanks
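A sketch of one way to derive the pair for single-HMC frames from the rest of the same result set, continuing after the eval active_hmc=... stage above. It assumes every valid pair appears on at least one frame that sees both HMCs, and that an HMC belongs to only one pair. The idea: collect all two-HMC combinations with eventstats, expand them against every row, keep the combination whose members include the frame's single HMC, then collapse back to one row per frame.

| eventstats values(active_hmc) as all_pairs
| eval all_pairs=mvfilter(match(all_pairs, "_"))
| mvexpand all_pairs
| eval member1=mvindex(split(all_pairs, "_"), 0), member2=mvindex(split(all_pairs, "_"), 1)
| eval candidate=if(hmc_count=1 AND (active_hmc=member1 OR active_hmc=member2), all_pairs, null())
| stats first(active_hmc) as active_hmc first(hmc_count) as hmc_count first(datacenter) as datacenter values(candidate) as candidate by frame_serial frame_name
| eval hmc_pair=if(hmc_count=2, active_hmc, coalesce(mvindex(candidate, 0), active_hmc))
| table frame_name frame_serial active_hmc hmc_pair datacenter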
| eval vm_unit=case(vmSize="Standard_F16s_v2",2,vmSize="Standard_F8s_v2",1,vmSize="Standard_F4s",0.5,vmSize="Standard_F2s_v2",0.25) | bin _time span=1h | stats values(vm_unit) as vm_unit values(location) as location by _time id | timechart span=1h usenull=true sum(vm_unit)-case(location="westus2",2245,location="centralus",2146,location="northeurope",624,location="germanywestcentral",620) as vm_count by location | fillnull value=0
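The last timechart line mixes an aggregation with a per-event case(), which timechart generally will not accept as written. A sketch of one way to get what appears to be the intended result, assuming the goal is "hourly sum of vm_unit per location, minus a fixed per-location baseline":

| eval vm_unit=case(vmSize="Standard_F16s_v2",2, vmSize="Standard_F8s_v2",1, vmSize="Standard_F4s",0.5, vmSize="Standard_F2s_v2",0.25)
| bin _time span=1h
| stats values(vm_unit) as vm_unit values(location) as location by _time id
| eval baseline=case(location="westus2",2245, location="centralus",2146, location="northeurope",624, location="germanywestcentral",620)
| stats sum(vm_unit) as vm_sum max(baseline) as baseline by _time location
| eval vm_count=vm_sum - baseline
| xyseries _time location vm_count
| fillnull value=0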
1. How do I get the total sum of call_Duration (converted from ms to seconds) across all of the call_Name values below?

call_Name=A call_Duration=501
call_Name=B call_Duration=2456
call_Name=C call_Duration=1115
call_Name=D call_Duration=1598
call_Name=E call_Duration=1621

I also have another column (column name is E2E time), which totals 17.677 seconds.

2. I need help calculating the total time difference: E2E Time - total call_Duration.
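A sketch of the arithmetic, assuming call_Duration is already extracted as a field in milliseconds and the end-to-end time is available in a field (called E2E_time here purely for illustration, in seconds):

| stats sum(call_Duration) as total_duration_ms max(E2E_time) as e2e_seconds
| eval total_duration_s = total_duration_ms / 1000
| eval difference_s = e2e_seconds - total_duration_s

With the sample values, total_duration_ms = 501 + 2456 + 1115 + 1598 + 1621 = 7291 ms = 7.291 s, so the difference from 17.677 s would be 10.386 s.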
Here is the raw log      { "markers": { "requestId": "RAWWyBVRjlX1wCr3JPINpZz6TLfa6FAM_09c958c6", "msgId": "5eeaab92-3432-42ac-803e-d30f0b26261e", "assetId": "urn:aaid:sc:VA6C2:412d8e58-0180-45a8-80b7-fe415604a91a", "path": "/content/assets/736d7625-1ac4-42f9-b461-ddc4f313297b", "processor": "ColorExtractionEventProcessor" }, "timestamp": "2023-04-11 23:00:45.151", "level": "ERROR", "thread": "DefaultDispatcher-worker-14", "msg": "failure processing event", "exception": "org.springframework.web.reactive.function.client.WebClientResponseException$Forbidden: 403 Forbidden from GET https://anonymizeddomain/rendition/id/urn:aaid:sc:VA6C2:412d8e58-0180-45a8-80b7-fe415604a91a;size=250?fragment=id%3D1f7c51ce-7815-4e1a-92bd-37c274b6f097\n\tat org.springframework.web.reactive.function.client.WebClientResponseException.create(WebClientResponseException.java:200)\n\tSuppressed: reactor.core.publisher.FluxOnAssembly$OnAssemblyException: \nError has been observed at the following site(s):\n\t*__checkpoint ⇢ 403 from GET https://anonymizeddomain/rendition/id/urn:aaid:sc:VA6C2:412d8e58-0180-45a8-80b7-fe415604a91a;size=250?fragment=id%3D1f7c51ce-7815-4e1a-92bd-37c274b6f097 [DefaultWebClient]\nOriginal Stack Trace:\n\t\tat org.springframework.web.reactive.function.client.WebClientResponseException.create(WebClientResponseException.java:200)\n\t\tat org.springframework.web.reactive.function.client.DefaultClientResponse.lambda$createException$1(DefaultClientResponse.java:207)\n\t\tat reactor.core.publisher.FluxMap$MapSubscriber.onNext(FluxMap.java:106)\n\t\tat io.opentelemetry.javaagent.shaded.instrumentation.reactor.TracingSubscriber.lambda$onNext$1(TracingSubscriber.java:62)\n\t\tat io.opentelemetry.javaagent.shaded.instrumentation.reactor.TracingSubscriber.withActiveSpan(TracingSubscriber.java:83)\n\t\tat io.opentelemetry.javaagent.shaded.instrumentation.reactor.TracingSubscriber.onNext(TracingSubscriber.java:62)\n\t\tat reactor.core.publisher.FluxOnErrorResume$ResumeSubscriber.onNext(FluxOnErrorResume.java:79)\n\t\tat io.opentelemetry.javaagent.shaded.instrumentation.reactor.TracingSubscriber.lambda$onNext$1(TracingSubscriber.java:62)\n\t\tat io.opentelemetry.javaagent.shaded.instrumentation.reactor.TracingSubscriber.withActiveSpan(TracingSubscriber.java:83)\n\t\tat io.opentelemetry.javaagent.shaded.instrumentation.reactor.TracingSubscriber.onNext(TracingSubscriber.java:62)\n\t\tat reactor.core.publisher.FluxDefaultIfEmpty$DefaultIfEmptySubscriber.onNext(FluxDefaultIfEmpty.java:101)\n\t\tat io.opentelemetry.javaagent.shaded.instrumentation.reactor.TracingSubscriber.lambda$onNext$1(TracingSubscriber.java:62)\n\t\tat io.opentelemetry.javaagent.shaded.instrumentation.reactor.TracingSubscriber.withActiveSpan(TracingSubscriber.java:83)\n\t\tat io.opentelemetry.javaagent.shaded.instrumentation.reactor.TracingSubscriber.onNext(TracingSubscriber.java:62)\n\t\tat reactor.core.publisher.FluxHide$SuppressFuseableSubscriber.onNext(FluxHide.java:137)\n\t\tat reactor.core.publisher.FluxMapFuseable$MapFuseableSubscriber.onNext(FluxMapFuseable.java:127)\n\t\tat reactor.core.publisher.FluxContextWrite$ContextWriteSubscriber.onNext(FluxContextWrite.java:107)\n\t\tat io.opentelemetry.javaagent.shaded.instrumentation.reactor.TracingSubscriber.lambda$onNext$1(TracingSubscriber.java:62)\n\t\tat io.opentelemetry.javaagent.shaded.instrumentation.reactor.TracingSubscriber.withActiveSpan(TracingSubscriber.java:83)\n\t\tat 
io.opentelemetry.javaagent.shaded.instrumentation.reactor.TracingSubscriber.onNext(TracingSubscriber.java:62)\n\t\tat reactor.core.publisher.FluxHide$SuppressFuseableSubscriber.onNext(FluxHide.java:137)\n\t\tat reactor.core.publisher.FluxMapFuseable$MapFuseableSubscriber.onNext(FluxMapFuseable.java:127)\n\t\tat io.opentelemetry.javaagent.shaded.instrumentation.reactor.TracingSubscriber.lambda$onNext$1(TracingSubscriber.java:62)\n\t\tat io.opentelemetry.javaagent.shaded.instrumentation.reactor.TracingSubscriber.withActiveSpan(TracingSubscriber.java:83)\n\t\tat io.opentelemetry.javaagent.shaded.instrumentation.reactor.TracingSubscriber.onNext(TracingSubscriber.java:62)\n\t\tat reactor.core.publisher.FluxHide$SuppressFuseableSubscriber.onNext(FluxHide.java:137)\n\t\tat reactor.core.publisher.FluxFilterFuseable$FilterFuseableSubscriber.onNext(FluxFilterFuseable.java:118)\n\t\tat io.opentelemetry.javaagent.shaded.instrumentation.reactor.TracingSubscriber.lambda$onNext$1(TracingSubscriber.java:62)\n\t\tat io.opentelemetry.javaagent.shaded.instrumentation.reactor.TracingSubscriber.withActiveSpan(TracingSubscriber.java:83)\n\t\tat io.opentelemetry.javaagent.shaded.instrumentation.reactor.TracingSubscriber.onNext(TracingSubscriber.java:62)\n\t\tat reactor.core.publisher.FluxHide$SuppressFuseableSubscriber.onNext(FluxHide.java:137)\n\t\tat reactor.core.publisher.Operators$MonoSubscriber.complete(Operators.java:1816)\n\t\tat reactor.core.publisher.MonoCollect$CollectSubscriber.onComplete(MonoCollect.java:159)\n\t\tat io.opentelemetry.javaagent.shaded.instrumentation.reactor.TracingSubscriber.withActiveSpan(TracingSubscriber.java:83)\n\t\tat io.opentelemetry.javaagent.shaded.instrumentation.reactor.TracingSubscriber.onComplete(TracingSubscriber.java:72)\n\t\tat reactor.core.publisher.FluxMap$MapSubscriber.onComplete(FluxMap.java:142)\n\t\tat io.opentelemetry.javaagent.shaded.instrumentation.reactor.TracingSubscriber.withActiveSpan(TracingSubscriber.java:83)\n\t\tat io.opentelemetry.javaagent.shaded.instrumentation.reactor.TracingSubscriber.onComplete(TracingSubscriber.java:72)\n\t\tat reactor.core.publisher.FluxPeek$PeekSubscriber.onComplete(FluxPeek.java:260)\n\t\tat io.opentelemetry.javaagent.shaded.instrumentation.reactor.TracingSubscriber.withActiveSpan(TracingSubscriber.java:83)\n\t\tat io.opentelemetry.javaagent.shaded.instrumentation.reactor.TracingSubscriber.onComplete(TracingSubscriber.java:72)\n\t\tat reactor.core.publisher.FluxMap$MapSubscriber.onComplete(FluxMap.java:142)\n\t\tat reactor.netty.channel.FluxReceive.onInboundComplete(FluxReceive.java:400)\n\t\tat reactor.netty.channel.ChannelOperations.onInboundComplete(ChannelOperations.java:419)\n\t\tat reactor.netty.channel.ChannelOperations.terminate(ChannelOperations.java:473)\n\t\tat reactor.netty.http.client.HttpClientOperations.onInboundNext(HttpClientOperations.java:703)\n\t\tat reactor.netty.channel.ChannelOperationsHandler.channelRead(ChannelOperationsHandler.java:93)\n\t\tat io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:379)\n\t\tat io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:365)\n\t\tat io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:357)\n\t\tat io.netty.handler.timeout.IdleStateHandler.channelRead(IdleStateHandler.java:286)\n\t\tat io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:379)\n\t\tat 
io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:365)\n\t\tat io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:357)\n\t\tat io.netty.handler.codec.MessageToMessageDecoder.channelRead(MessageToMessageDecoder.java:103)\n\t\tat io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:379)\n\t\tat io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:365)\n\t\tat io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:357)\n\t\tat io.netty.channel.CombinedChannelDuplexHandler$DelegatingChannelHandlerContext.fireChannelRead(CombinedChannelDuplexHandler.java:436)\n\t\tat io.opentelemetry.javaagent.instrumentation.netty.v4_1.client.HttpClientResponseTracingHandler.channelRead(HttpClientResponseTracingHandler.java:29)\n\t\tat io.netty.channel.CombinedChannelDuplexHandler.channelRead(CombinedChannelDuplexHandler.java:251)\n\t\tat io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:379)\n\t\tat io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:365)\n\t\tat io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:357)\n\t\tat io.netty.channel.CombinedChannelDuplexHandler$DelegatingChannelHandlerContext.fireChannelRead(CombinedChannelDuplexHandler.java:436)\n\t\tat io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:327)\n\t\tat io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:299)\n\t\tat io.netty.channel.CombinedChannelDuplexHandler.channelRead(CombinedChannelDuplexHandler.java:251)\n\t\tat io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:379)\n\t\tat io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:365)\n\t\tat io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:357)\n\t\tat io.netty.handler.ssl.SslHandler.unwrap(SslHandler.java:1372)\n\t\tat io.netty.handler.ssl.SslHandler.decodeJdkCompatible(SslHandler.java:1235)\n\t\tat io.netty.handler.ssl.SslHandler.decode(SslHandler.java:1284)\n\t\tat io.netty.handler.codec.ByteToMessageDecoder.decodeRemovalReentryProtection(ByteToMessageDecoder.java:510)\n\t\tat io.netty.handler.codec.ByteToMessageDecoder.callDecode(ByteToMessageDecoder.java:449)\n\t\tat io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:279)\n\t\tat io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:379)\n\t\tat io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:365)\n\t\tat io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:357)\n\t\tat io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1410)\n\t\tat io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:379)\n\t\tat io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:365)\n\t\tat io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:919)\n\t\tat 
io.netty.channel.epoll.AbstractEpollStreamChannel$EpollStreamUnsafe.epollInReady(AbstractEpollStreamChannel.java:800)\n\t\tat io.netty.channel.epoll.EpollEventLoop.processReady(EpollEventLoop.java:487)\n\t\tat io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:385)\n\t\tat io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:986)\n\t\tat io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74)\n\t\tat io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30)\n\t\tat java.base/java.lang.Thread.run(Thread.java:834)\n", "args": { "reason": "403 Forbidden from GET https://anonymizeddomain/rendition/id/urn:aaid:sc:VA6C2:412d8e58-0180-45a8-80b7-fe415604a91a;size=250?fragment=id%3D1f7c51ce-7815-4e1a-92bd-37c274b6f097" } }       And here is my query      index=projectm-dev-ue1 sourcetype=content-sync msg="failure processing ACP event" "403 forbidden" | table args.reason      This returns 6 entries with args.reason as blank. Why? This same query works for other log:     { ... "args": { "reason": "null cannot be cast to non-null type kotlin.collections.List<kotlin.collections.Map<kotlin.String, kotlin.Any>>" } }      Why?
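One thing worth checking, offered as a guess rather than a confirmed fix: for very large events (this one carries a multi-kilobyte stack trace) automatic JSON field extraction can stop short of keys near the end of the event, while it works fine on short events like the second example. Extracting the field explicitly at search time may tell you which case you are in:

index=projectm-dev-ue1 sourcetype=content-sync "403 Forbidden"
| spath path=args.reason output=reason
| table reason args.reason

(Also note that the msg value in the raw event shown above is "failure processing event", not "failure processing ACP event", so the msg filter may need to match the actual text.)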
Given this simple scenario: I have users in a platform that perform actions, and I want to return all the users that haven't performed a specific action. For example, I want to return all users that have performed the action "View" but haven't performed the action "Purchase". The index is the same for both. Pseudo query:

index=A action=view [ search index=A action=purchase | stats count by user ] | where count by user = 0

Any idea how I could write such a query?
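One way to sketch this without a subsearch, assuming the user and action fields shown in the pseudo query: count each action type per user in a single pass, then keep the users with views but no purchases.

index=A action IN (view, purchase)
| stats count(eval(action="view")) as views, count(eval(action="purchase")) as purchases by user
| where views > 0 AND purchases = 0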
Hi, I would like to know if someone can help me with this issue. I am trying to add a time constraint to an SPL search, and I have this so far:

index=... host=... source=... sourcetype=... ...
| eval Range = "-1@w"
| where TEMPDATE >= (relative_time(now(),Range))

That gives exactly one week back, meaning: if the SPL runs on the current week's Monday, then the data will be from that same Monday; if the SPL runs on Tuesday, then the data will be from Monday and Tuesday of the current week; and so on. I need to change it as follows:

If the SPL runs on Monday (current week), then the data returned must be from the previous week, Monday through Saturday.
If the SPL runs during the rest of the week (Tuesday - Sunday), then the data must still be from the previous week through Saturday.
If the end of the month falls in the middle of a week, I'd like a month cut-off and to only include data up until that day.

For example: if the month ends halfway through the previous week, say on a Wednesday, only get the data from that Monday to that Wednesday. If the SPL runs on Monday June 5th, in this case, then only get data from the previous week: May 29, 30 and 31. If the SPL runs on Tuesday - Saturday, same as above: still only get the data from Monday the 29th through Wednesday the 31st.

I have so far: earliest = "-2@w" latest=@w1

Thank you for any guidance. I am not sure how earliest and latest work. Diana
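A sketch of the weekly window using snap-to-Monday time modifiers, assuming a Monday-Sunday week (the month cut-off would still need an extra where clause on the month of _time, so this only covers the weekly part):

index=... host=... source=... sourcetype=... earliest=-1w@w1 latest=@w1-1d

Here -1w@w1 goes back one week and then snaps to that week's Monday (previous Monday, 00:00), and @w1-1d snaps to the current week's Monday and steps back one day (Sunday, 00:00), so the window covers the previous Monday through the end of the previous Saturday whether the search runs on Monday or later in the week.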
I'm attempting to find file downloads within a 2-minute timespan following a browser being spawned from Outlook (my subsearch). Everything works fine (the search and subsearch) until I add the regex command limiting the file path to the Downloads folder. I'm getting the error "Error in 'SearchOperator:regex': Usage: regex <field> (=|!=) <regex>." Can anyone help me understand why the regex command is throwing it off? I think it's because it's taking the subsearch as part of the regex syntax, but I don't know how to separate the two.

Search:

index=random_index event_simpleName=*FileWritten
| regex TargetFileName="^[\WD]\w*\S*\W(?:Users)\W\w+\.\w+\W(?:Downloads)\W\w+"
    [search index=random_index* sourcetype=stuff event_simpleName=ProcessRollup* ParentBaseFileName=OUTLOOK.EXE ImageFileName IN (*firefox* *chrome* *edge*) CommandLine IN (*sharepoint.com*) NOT CommandLine IN (*vendor*)
    | rename _time AS earliest
    | eval latest=relative_time(_time,"+5min@min")
    | table aid earliest latest
    | format]
| table _time aid TargetFileName
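A sketch of the likely fix: the subsearch output gets appended as extra arguments to the regex command, which only accepts <field>=<regex>. Moving the subsearch up into the base search (where its formatted earliest/latest/aid output acts as a filter) and leaving regex as its own pipeline stage keeps the two separate. Note also that after rename _time AS earliest, the original _time field is gone, so the relative_time call should use earliest.

index=random_index event_simpleName=*FileWritten
    [search index=random_index* sourcetype=stuff event_simpleName=ProcessRollup* ParentBaseFileName=OUTLOOK.EXE ImageFileName IN (*firefox* *chrome* *edge*) CommandLine IN (*sharepoint.com*) NOT CommandLine IN (*vendor*)
    | rename _time AS earliest
    | eval latest=relative_time(earliest,"+5min@min")
    | table aid earliest latest
    | format]
| regex TargetFileName="^[\WD]\w*\S*\W(?:Users)\W\w+\.\w+\W(?:Downloads)\W\w+"
| table _time aid TargetFileName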