All Posts

So in that case we don't want to use the reject condition on that row, right? @ITWhisperer Could you please share a sample query for reference?
Just use one token - if necessary, use the <change> construct in your filters to set/unset a third token GHI $ghi$ based on whether $abc$ and/or $def$ are set
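To make that concrete, here is a minimal sketch of the <change> idea for one input (the dropdown type, the "All" default and the panel contents are placeholders to adapt; the DEF input would get an equivalent <change> block, and the rows then depend on / reject $ghi$ instead of $abc$ $def$):

<input type="dropdown" token="abc" searchWhenChanged="true">
  <label>ABC</label>
  <choice value="*">All</choice>
  <default>*</default>
  <change>
    <!-- unset $ghi$ while ABC is left at the default, set it when a specific value is chosen -->
    <condition value="*">
      <unset token="ghi"></unset>
    </condition>
    <condition>
      <set token="ghi">true</set>
    </condition>
  </change>
</input>

<row depends="$ghi$">
  <!-- panels shown only when a specific ABC/DEF value is selected -->
</row>
<row rejects="$ghi$">
  <!-- panels that stay visible otherwise -->
</row>

Because $ghi$ stays unset until a real selection is made, the rejects row is no longer hidden just because the input tokens exist.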
Hi @phanTom  thanks for the suggestions, they help!  
The counts were calculated via:

index=index_1 (sourcetype=source_1 field_D="Text" field_E=*Down* OR field_E=*Up*) OR (sourcetype=source_2) | dedup field_B keepempty=true | eval field_AB=coalesce(field_A, field_B) | where isnotnull(field_AB) | stats dc(field_AB) as count by sourcetype

To your point about source vs. sourcetype, I realized that after looking at those counts and made both of my filters use sourcetype. As for my Initial Query Results, let me clarify: UniqueID is not the same value in every row. It can occur more than once, yes, as this log is reporting the status of the device as it goes up and down. As for field_C, to clarify from my initial post: field_C is an identifier which can be mapped to multiple field_AB. field_C is the only field from source_2 I really want to add to source_1. Essentially I'm using source_2 as a reference to pull field_C from, matching all instances of UniqueID_X to the corresponding field_C. Here is what it looks like with that in mind:

Initial Query Results
field_D      field_AB     field_C   field_E
DeviceType   UniqueID_1             Up
DeviceType   UniqueID_2             Down
             UniqueID_3   Data_2
             UniqueID_4   Data_1

Expected results
field_D      field_AB     field_C   field_E
DeviceType   UniqueID_1   Data_1    Up
DeviceType   UniqueID_2   Data_2    Down
DeviceType   UniqueID_3   Data_2    Down
DeviceType   UniqueID_4   Data_1    Down

Additionally, here is raw data from source_1:

2024/03/07 09:06:12.75 ET,ND="\A001 ",DIP="$ABC0.",Sys=B002,Serv=C,Ent="AAA-BBBBBB ",AppID=0000,LogID=1111,field_A="AA000111 ",Alias=" ",Tkn1="222222222 ",Tkn2="333333333 ",field_D="DeviceType",field_E="BB AA000111 (00) Down, error 0000"

And here is raw data from source_2 (field_B and field_C are towards the bottom):

BSYS= ,EXL= ,HI= ,HSTT= ,MBR= ,NAME= ,ORRR= ,RDDD= ,REV= ,LINK=0000,POSTDATE=240307,RESP= ,RESP_ERR= ,R= ,D= ,DE= ,RECOM= ,SCORE= ,STAND= ,ROUTE=00000000,NUM=0000000000,NUM1=0000000000,NUM2=0000000000,NUM3=000000000,CODE=1,POS=P,RNDTTT= ,AUTH=000000,ASYS=0-,AN=A000 ,RCODE= ,TIMEIN=24030709061224,BURNRES= ,BURNRT=00000,BURNS=00000,ACCEPTCODE=0000,ACCEPTNAME=00000000000000 ,ANOTHERID=AAA ,INPUT_CAP=0,INPUTMODE=0,ST=00,STATE=AA,LEVEL= ,COMREQ=0,PDATE=240308,CCODE=000,RTD= ,CCC= ,CCC_RES= ,CCC_RES2= ,SCODE= ,DACTUIB= ,DDD= ,DDDR= ,DDDE= ,DDDEB= ,DDDN= ,DDDT= ,DDDF= ,DDDM= ,DDDM2= ,DDMN= ,DDOI= ,DDRN= ,DDRC= ,DDAF= ,DDTA= ,DDTAF= ,DDTT= ,EI= ,EARD= ,EERD= ,EEFT= ,ESSS= ,EAS= ,EFRT= ,EH=AAA0AAA ,EII=AAAAAAA0,EO=AAAAA00B,FMS= ,FPPP= ,FAT=00,GRC=00,HS=00,HII=00000000,ITN=AAA03,ISS=0000,JT=240307090612,LAA=0000000000,LAA2=0000000000,MSI= ,MIDD=AAAAA AAAAAAAAAAA,MT=0000,MMMM= ,MMOD= ,MMOS= ,MMU= ,MD= ,MDDD= ,MRRC= ,MRRRR= ,MSGGG=0,MSGGG=0000,MCS= ,MCQ= ,NCF= ,NPD=240308,NII=AAA,NPPC= ,NNNPPP= ,NNNPPPR= ,NTNT= ,NTNTR= ,OSS=00,OOM= ,PADD= ,POOOS=000,PDCCC=000000000A,PSC12=000000 ,PSCSC=11111111,PTTTID= ,QTIME=11111,RAII=111111111,REIII=77775750 ,RTDDD= ,RTRRR= ,RTRRRS= ,RIII=BBBBBBB ,RSSST=AA ,RSAAAA=AAAA B CCCCCCCCC DD EE,RRRC=00,RTTTTT=00000,RESSSCC=A,RSSS=0,RSCC=A,EFFN=00000000,RCCCC=00,RMRRM= ,RRTD= ,PREPRE=000000 ,SSTS=00,SNNN=000000,Ssrc=,STMF=00000000,field_B=AA000111,SCCA=00000000,SCCA=00000000,STQQ=000000,SYSTEMTRACE=00000000000,TIST=00000,field_C=AA00 ,TOID=00000000,TST=OK,TT=00,TIMEZONE=*,ACCT=00,TDA= ,TDARID= ,TCC=00,TNID=000000,TTTDM=240307080559,TTTIII=0000000000000000,VRES=

Hope that clears it up.
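If it helps to see the matching step written out, here is a minimal sketch of one common way to carry field_C from the source_2 reference events onto the source_1 events that share the same identifier (it reuses the field names from the post, adds a trim() because the raw source_1 sample shows a trailing space inside field_A, and is only illustrative, not a full solution):

index=index_1 ((sourcetype=source_1 field_D="Text" (field_E=*Down* OR field_E=*Up*)) OR sourcetype=source_2)
| eval field_AB=trim(coalesce(field_A, field_B))
``` copy field_C from the source_2 reference event onto every event with the same field_AB ```
| eventstats values(field_C) as field_C by field_AB
| search sourcetype=source_1
| table field_D field_AB field_C field_E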
Hello @gcusello  Yes these files are already on the indexers.
Hello, we had an index that stopped receiving logs. Since we do not manage the host sending the logs, I wanted to get more information before reaching out. The one interesting error that showed up right about the time the logs stopped was the following; I have not been able to find anything useful about this type of error. Also, the error is being thrown from the indexer:

Unable to read from raw size file="/mnt/splunkdb/<indexname>/db/hot_v1_57/.rawSize": Looks like the file is invalid.

Thanks for any assistance I can get.
Heavy Forwarder issues on version 9.0.2: can't connect to the indexer after an upgrade from 8.2.0. Anyone know of more current discussion than this 2015 post: https://community.splunk.com/t5/Getting-Data-In/Why-am-I-getting-error-quot-Unexpected-character-while-looking/m-p/250699

Error httpclient request [6244 indexerpipe] - caught exception while parsing http reply: unexpected character while looking for value : '<'
Error S2SOverHttpOutputProcessor [6244 indexerpipe] - http 502 bad gateway
Hey, can someone help me get profiling metrics, like the CPU and RAM used by the app, to show up in the Splunk Observability portal? I can get the app metrics: I have used a simple curl Java app which curls Google every second, and this shows up in APM metrics. I have done all the configuration to enable profiling as per the Splunk docs, but nothing shows up in the profiling section. Is it because I am using the free trial? I am trying this on a simple EC2 instance, instrumenting the app with the java -jar command; I have been exporting the necessary environment variables and have added the required Java options while instrumenting the app using splunk-otel-agent-collector.jar, but nothing shows up. Please help.
Hello, can someone help me with a search to find out whether any changes have been made to Splunk reports (e.g. the Palo Alto report) in the last 30 days? Thanks
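Not a complete answer, but one hedged starting point: the saved-searches REST endpoint exposes an updated timestamp per report, so a sketch along these lines could list reports modified in the last 30 days (the time-format string and the listed fields are assumptions to adjust for your environment; for a true change history you would need the _audit index or other configuration tracking):

| rest /servicesNS/-/-/saved/searches
| eval updated_epoch=strptime(updated, "%Y-%m-%dT%H:%M:%S%:z")
| where updated_epoch >= relative_time(now(), "-30d")
| rename eai:acl.app as app, eai:acl.owner as owner
| table title app owner updated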
Hi @allidoiswinboom , if you don't have any intermediate HF, you must locate them on the Indexers. Ciao. Giuseppe
Hi Experts, I am encountering an issue with using filter tokens in a specific row on my dashboard. I have two filters named ABC and DEF; the token for ABC is $abc$ and for DEF is $def$. I want to pass these tokens only to one specific row, while rejecting them for the others. For the rows where I need to pass the tokens, I've used the following syntax: <row depends="$abc$ $def$"></row> For the row where I don't want to use the tokens, I've used the following syntax: <row rejects="$abc$ $def$"></row>. However, when I use the rejects condition, the rows are hidden. I want these rows to still be visible. Could someone please advise on how to resolve this issue? I would appreciate any help. Thank you in advance!
Groan.
Hi @gcusello Thanks for the reply. We are using UFs and I have the conf files on the deployment server and the indexers. We use a CM to manage all the Indexers, so I deploy the updated files from the CM to ensure consistent hashing across the files. Thank you!
As to the missing results - sure, because your TOTAL field appears empty. You should just debug that for a start. All the extra conditional logic you want can be implemented later once you get this core piece working. Here's how I'd approach it:

Temporarily comment out (triple backticks before and after them) or remove all the fieldformats and the trailing table command so you can see what values all fields actually have. When you put them back in, I suggest doing those "pretty it up" tasks as one of the last steps after all actual "work" has been done. Also this makes it easier to follow the code because it'll be structured better - first get your data, then do your calculations, lastly make things pretty.

Then just backtrack. Divide and conquer. Remove all the stuff after the stats command where TOTAL is calculated. If there's no result for TOTAL, figure out why. Since TOTAL is the sum of time_difference, take out everything from the stats onward and see what time_difference is in the events. If it's blank, then work backwards one more step and see where it comes from - incident review time and notable time - so what are the values for *those* fields? At some point you'll see what I'm sure is a facepalm somewhere in there.

Once you have all that straightened out, add back in the extra stuff one step at a time, confirming the results at each step. You'll have a lot better understanding of the data you are working with and also how all this works, too.

THEN. There's likely to be no reason at all to separately do only "medium" severity. I suspect if you remove the "where" way up near the top, then do all your stats "by severity", you may be able to just calculate the answers for all severities in one pass. But again, baby steps. Get it working first, then we can modify it to do that.
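To make the "comment it out" step concrete, here is a tiny, purely illustrative SPL fragment (the base search and field names are placeholders, not the poster's actual search); triple backticks are SPL's comment syntax, so the formatting and table lines are ignored until you re-enable them:

index=my_notables
| eval time_difference = incident_review_time - notable_time
| stats sum(time_difference) as TOTAL by severity
``` | fieldformat TOTAL = tostring(TOTAL, "duration") ```
``` | table severity TOTAL ```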
@Teamdrop  Could you be more specific about what you are looking for in Splunk?
Thanks for the reply. The customer is seeing fluctuations, from 4 TB at its peak down to around 2.8 TB now. I am in a prod environment, so I cannot restart as there would be too much emailing and authorising to comply with. What would be a good way to investigate this, or some graphs to indicate whether there has been a decrease in events or they've stayed the same, or whether there has been a decrease in throughput (would this be relevant, as I'd need to know the volume of data just before it's indexed and counted against the licensing meter, per index)?
Fluctuations in ingest are normal. If what you're seeing appears abnormal, then there are a few things to check:

1) Verify the UF and SC4S are still running.
2) Restart the UF and/or SC4S.
3) Confirm the applications generating the data are still running.
4) Check for any network changes that may be blocking ingestion.
5) Check the UF and SC4S logs to see if they're reporting any problems sending data.
6) Confirm the certificates used (if any) have not expired.

The data used by the CMC to show ingestion rates is retained for only 30 days by default. That is why you cannot view the rates for November. To chart indexed volume per index yourself, see the search sketched below.
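For the per-index volume question specifically, a minimal sketch (it assumes the license manager's _internal logs are searchable from where you run it; the span and GB conversion are adjustable):

index=_internal source=*license_usage.log type=Usage
| eval GB = b/1024/1024/1024
| timechart span=1d sum(GB) as GB_indexed by idx

This charts daily indexed volume per index, which you can compare against the 4 TB peak; note that license_usage.log lives in _internal, so it is subject to the same retention limits mentioned above.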
The eventstats syntax is incorrect - the rename has to come before the by clause, i.e. max(count) as highcount by host. Try this:

index=iis status=404 uri="*/*.*"
| stats count by host uri
| eventstats max(count) as highcount by host
| sort -highcount -count
| table highcount count host uri
I have a relatively simple query that counts HTTP 404 events in IIS logs. I wanted to sort them according to which hosts had the highest individual count, but the "highcount" field is always blank. (I probably need to also sort by host, but that's irrelevant to the eventstats issue.)

index=iis status=404 uri="*/*.*" |stats count by host uri |eventstats max(count) by host as highcount |sort -highcount -count |table highcount count host uri
This answer came to me from support:

1. You cannot select only specific lines to be published to Log Analytics: the analytics agent reads the entire file configured under the Log Analytics source rule, with all its contents, and sends it to ES as log events. So how much disk space Log Analytics data consumes is entirely determined by how many log files you are monitoring and how big they are; this cannot be controlled manually.

2. Once data is in ES, you may use regex and grok patterns for field extraction; however, every raw message line the analytics agent reads will be published to the Events Service, as mentioned in point 1. To see only certain data, you can make use of ADQL filters and print only the entries you are interested in.

3. Since all the data that was present in your log files is now in ES, it is stored in various shards based on timestamp. How much data ES keeps depends on your retention period. If your analytics retention is, say, 90 days (which is per your license units and the retention configuration you have set in the Controller), all the data will be stored for at least 90 days, and when those indices expire they will be automatically deleted from the backend. However, if that is too much data, as your ES is currently a single node and does not have enough resources to store the volume you are sending, you may choose to delete older data and keep it for a shorter duration, such as only 30, 10, or 8 days (the minimum retention period). This does not mean you must delete or keep all 90 days of Log Analytics data; rather, you can choose a shorter retention for data stored in ES and delete old data so it does not occupy space unnecessarily.

Regarding "Having so many resources allocated just for extracting errors from logs does not seem like the right way to me": none of the suggested recommendations was to fetch only ERROR data from logs, as it is clearly mentioned that this cannot be done per the product design. The recommendations were about how, in a scenario where we cannot control what comes into ES from your log files, we can still manage your data and space so that you keep the useful data and discard the extra, without worrying about using more disk space on this host.

Regarding "Alternatively, could you recommend me how to select only errors from the log files?": this is already answered in point 1.