All Posts

Hello, we had an index that stopped receiving logs. Since we do not manage the host sending the logs, I wanted to gather more information before reaching out. The one interesting error that showed up right around the time the logs stopped was the following; I have not been able to find anything useful about this type of error. Note that the error is being thrown from the indexer:

Unable to read from raw size file="/mnt/splunkdb/<indexname>/db/hot_v1_57/.rawSize": Looks like the file is invalid.

Thanks for any assistance I can get.
Heavy Forwarder issues on version 9.0.2: it can't connect to the indexer after an upgrade from 8.2.0. Does anyone know of a more current discussion than this 2015 post? https://community.splunk.com/t5/Getting-Data-In/Why-am-I-getting-error-quot-Unexpected-character-while-looking/m-p/250699

Error httpclient request [6244 indexerpipe] - caught exception while parsing http reply: unexpected character while looking for value: '<'
Error S2SOverHttpOutputProcessor [6244 indexerpipe] - HTTP 502 Bad Gateway
Hey, can someone help me get profiling metrics (CPU and RAM used by the app) to show up in the Splunk Observability portal? I can get the app metrics: I used a simple Java app that curls google every second, and that shows up in APM metrics. I have done all the configuration to enable profiling per the Splunk docs, but nothing shows up in the profiling section. Is it because I am using the free trial? I am trying this on a simple EC2 instance, instrumenting the app with the java -jar command; I have exported the necessary environment variables and added the required Java options while instrumenting the app using splunk-otel-agent-collector.jar, but nothing shows up. Please help.
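For comparison, the Splunk docs enable AlwaysOn Profiling through system properties on the instrumented JVM. A typical launch looks roughly like this (the jar path, service name, and app jar are placeholders; the `splunk.profiler.*` properties are from the Splunk OpenTelemetry Java agent docs):

```shell
# Sketch: enable CPU and memory profiling on the instrumented JVM.
java -javaagent:./splunk-otel-javaagent.jar \
     -Dsplunk.profiler.enabled=true \
     -Dsplunk.profiler.memory.enabled=true \
     -Dotel.service.name=my-curl-app \
     -jar app.jar
```

If the agent starts correctly, its startup log lines normally confirm whether the profiler was activated, which is worth checking before suspecting the trial license.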
Hello, can someone help me with a search to find out whether any changes have been made to Splunk reports (e.g. a Palo Alto report) in the last 30 days? Thanks
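One starting point is the saved-searches REST endpoint, which records when each report was last updated. A sketch (assumes the reports are saved searches and that your role is allowed to run `| rest`):

```spl
| rest /servicesNS/-/-/saved/searches
| eval updated_epoch=strptime(updated, "%Y-%m-%dT%H:%M:%S%z")
| where updated_epoch >= relative_time(now(), "-30d")
| table title eai:acl.app eai:acl.owner updated
```

This shows *that* a report changed and when; to see *who* changed it, the _audit index is the usual follow-up.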
Hi @allidoiswinboom, if you don't have any intermediate HFs, you must locate them on the Indexers. Ciao. Giuseppe
Hi Experts, I am encountering an issue with using filter tokens in a specific row on my dashboard. I have two filters named ABC and DEF; the token for ABC is $abc$ and for DEF is $def$. I want to pass these tokens only to one specific row and reject them for the others. For the rows where I need to pass the tokens, I've used the following syntax: <row depends="$abc$ $def$"></row>. For the rows where I don't want to use the tokens, I've used: <row rejects="$abc$ $def$"></row>. However, when I use the rejects condition, those rows are hidden. I want these rows to still be visible. Could someone please advise on how to resolve this issue? I would appreciate any help. Thank you in advance!
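For reference, depends and rejects are opposites in Simple XML: depends shows an element only while the listed tokens are set, and rejects hides it as soon as any of them is set. A minimal sketch (token names taken from the post):

```xml
<row depends="$abc$,$def$">
  <!-- rendered only while $abc$ and $def$ are both set -->
</row>
<row rejects="$abc$,$def$">
  <!-- hidden whenever $abc$ or $def$ is set -->
</row>
```

So if the tokens are always set by the filters, a rejects row will always be hidden; rows that should stay visible regardless of the filters should simply carry neither attribute.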
Hi @gcusello Thanks for the reply. We are using UFs and I have the conf files on the deployment server and the indexers. We use a CM to manage all the indexers, so I deploy the updated files from the CM to ensure consistent hashing across the files. Thank you!
As to the missing results - sure, because your TOTAL field appears empty. You should debug that first. All the extra conditional logic you want can be implemented later, once you get this core piece working. Here's how I'd approach it:

Temporarily comment out (triple backticks before and after them) or remove all the fieldformats and the trailing table command so you can see what values all fields actually have. When you put them back in, I suggest doing those "pretty it up" tasks as one of the last steps, after all the actual "work" has been done. This also makes the code easier to follow because it'll be better structured: first get your data, then do your calculations, and lastly make things pretty.

Then backtrack. Divide and conquer. Remove all the stuff after the stats command where TOTAL is calculated. If there's no result for TOTAL, figure out why. Since TOTAL is the sum of time_difference, take out everything from the stats onward and see what time_difference is in the events. If it's blank, work backwards one more step and see where it comes from - incident review time and notable time - so what are the values for *those* fields? At some point you'll see what I'm sure is a facepalm somewhere in there.

Once you have all that straightened out, add the extra stuff back in one step at a time, confirming the results at each step. You'll have a much better understanding of the data you are working with, and of how all this works, too.

THEN: there's likely no reason at all to separately do only "medium" severity. I suspect that if you remove the "where" near the top and then do all your stats "by severity", you may be able to calculate the answers for all severities in one pass. But again, baby steps. Get it working first, then we can modify it to do that.
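The triple-backtick commenting mentioned above looks like this in SPL (everything between a pair of ``` markers is ignored at search time); the field names here are just placeholders for the poster's search:

````spl
index=notable
| stats sum(time_difference) AS TOTAL count AS alerts
``` | fieldformat TOTAL=tostring(TOTAL, "duration") ```
``` | table alerts TOTAL ```
````

With the formatting lines commented out, the raw values of TOTAL and alerts are visible, which is exactly what the debugging steps above need.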
@Teamdrop  Could you be more specific about what you are looking for in Splunk?
Thanks for the reply. The customer is seeing fluctuations: at its peak it was 4 TB, and it is now around 2.8 TB. I am in a prod environment, so I cannot restart as there would be too much emailing and authorising to comply with. What would be a good way to investigate this, or some graphs to indicate whether there has been a decrease in events or they've stayed the same, or whether there has been a decrease in throughput? (Would this be relevant, as I'd need to know the volume of data just before it's indexed and counted toward the licensing meter, per index?)
Fluctuations in ingest are normal. If what you're seeing appears abnormal, then there are a few things to check:
1) Verify the UF and SC4S are still running.
2) Restart the UF and/or SC4S.
3) Confirm the applications generating the data are still running.
4) Check for any network changes that may be blocking ingestion.
5) Check the UF and SC4S logs to see if they're reporting any problems sending data.
6) Confirm the certificates used (if any) have not expired.
The data used by the CMC to show ingestion rates is retained for only 30 days by default. That is why you cannot view the rates for November.
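To quantify whether ingest actually dropped, and in which index, the license usage log can be charted per index. A common sketch (run it where the license manager's _internal logs are searchable):

```spl
index=_internal source=*license_usage.log type=Usage
| eval GB = b/1024/1024/1024
| timechart span=1d sum(GB) AS GB by idx
```

A per-index daily trend like this makes it easy to see whether the drop is across the board (suggesting a forwarder or network issue) or confined to one index (suggesting one data source).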
The eventstats syntax is incorrect. Try this:

index=iis status=404 uri="*/*.*"
| stats count by host uri
| eventstats max(count) as highcount by host
| sort -highcount -count
| table highcount count host uri
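For intuition about why the argument order matters: `max(count) as highcount by host` names the aggregate before the grouping clause, and eventstats then writes the per-group result onto every row without collapsing any. A rough Python analogy (illustrative only, not how Splunk implements it):

```python
# Illustrative analogy for: | eventstats max(count) as highcount by host
rows = [
    {"host": "web1", "uri": "/a", "count": 5},
    {"host": "web1", "uri": "/b", "count": 9},
    {"host": "web2", "uri": "/c", "count": 3},
]

# Pass 1: compute max(count) per host group.
max_by_host = {}
for r in rows:
    max_by_host[r["host"]] = max(max_by_host.get(r["host"], 0), r["count"])

# Pass 2: annotate every row with its group's max -- no rows are collapsed.
for r in rows:
    r["highcount"] = max_by_host[r["host"]]

print([(r["host"], r["count"], r["highcount"]) for r in rows])
# → [('web1', 5, 9), ('web1', 9, 9), ('web2', 3, 3)]
```

Putting `by host` before `as highcount` (as in the original query) is what left highcount blank, since the `as` clause no longer bound to the aggregate.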
I have a relatively simple query that counts HTTP 404 events in IIS logs. I wanted to sort them according to which hosts had the highest individual count; however, the "highcount" field is always blank. (I probably also need to sort by host, but that's irrelevant to the eventstats issue.)

index=iis status=404 uri="*/*.*"
| stats count by host uri
| eventstats max(count) by host as highcount
| sort -highcount -count
| table highcount count host uri
This answer came to me from support:

You cannot select only specific lines to be published to Log Analytics; the analytics agent reads the entire file configured under the Log Analytics source rule, with all its contents, and sends it to ES as log events. So how much disk space Log Analytics data consumes depends entirely on how many log files you are monitoring and how big they are; this cannot be controlled manually. Once the data is in ES, you may use regex and grok patterns for field extraction; however, every raw message line the analytics agent reads will be published to the Events Service, as mentioned in point 1. To see only certain data, you can use ADQL filters and print only the entries you are interested in. All the data that was present in your log files is now in ES, stored in various shards based on timestamp and spacing. How much data ES keeps depends on your retention period. So if your analytics retention is (say) 90 days (per your license units and the retention configuration you have set in the Controller), all the data will be stored for at least 90 days, and when those indices expire they will be automatically deleted from the backend. However, if that is too much data, since your ES is currently a single node and does not have enough resources to store the large amount of data you are sending, you may choose to keep data for a shorter duration, such as 30, 10, or 8 days (the minimum retention period). This does not mean you either delete all 90 days of Log Analytics data or keep all of it; rather, you can choose a shorter retention for data stored in ES so that old data does not occupy space unnecessarily.

Now, regarding "Having so many resources allocated just for extracting errors from logs does not seem like the right way to me": none of the suggested recommendations was to fetch only ERROR data from the logs, as it is clearly stated that this cannot be done per the product design. The recommendations were instead about how, in this scenario where we cannot control what comes into ES from your log files, we can still manage your data and space sensibly, so that you keep the useful data and discard the rest without worrying about using more disk space on this host. Regarding "Alternatively, could you recommend me how to select only errors from the log files?": this is already answered in point 1.
I'm having a thought: a text input box that has a dropdown and can also accept typed text, with a single token. I am not sure whether this will work; I need guidance. Thanks in advance. Sanjai S
Dear Splunkers, I would like to ask for your feedback on the following issue with the ServiceNow add-on app. The problem is that I'm not able to display the settings for the add-on, where I need to select from the different ServiceNow accounts that were configured successfully. What it should look like is the following (taken from a different project): this is where I can choose my preferred account and configure details. Can you suggest what could be the reason I don't see these settings with my admin user role account? Thanks in advance. BR
Hi @Symon, don't use join because it's a very slow search; use stats in this way:

index=fortigate sourcetype IN (fgt_event, fgt_traffic)
| eval src=coalesce(src,assignip)
| stats values(srcport) AS srcport values(dest) AS dest values(destport) AS destport BY user src

If you want only the events present in both sourcetypes, you have to add an additional condition:

index=fortigate sourcetype IN (fgt_event, fgt_traffic)
| eval src=coalesce(src,assignip)
| stats values(srcport) AS srcport values(dest) AS dest values(destport) AS destport dc(sourcetype) AS sourcetype_count BY user src
| where sourcetype_count=2
| fields - sourcetype_count

Ciao. Giuseppe
I did not get results. I have to calculate the average time for closing alerts by severity; in this case I am calculating "medium" severity alerts, so the equation should be total medium alerts / total time of "closing" medium alerts.
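As a generic sketch of that calculation (the index and time field names below are assumptions, so adjust them to your data): note the average close time is the total closing time divided by the number of alerts, not the other way around, and `stats avg` computes that ratio in one step:

```spl
index=notable status=closed earliest=-30d
| eval time_difference = closed_time - created_time
| stats count AS alerts avg(time_difference) AS avg_close_secs by severity
| eval avg_close = tostring(round(avg_close_secs), "duration")
```

Grouping `by severity` gives the medium row alongside all the others, so no separate per-severity search is needed.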
Hello, I have started a Cloud Trial to create a test environment for a connector that I wanted to test for a customer. This connector requires additional ports to be opened to allow data ingestion from Azure Event Hub, which should be configured using the ACS API. I've enabled token authentication from the portal and generated a new token. I then tried to configure Postman to use the API, and I've set up a new request to test API access:

https://admin.splunk.com/{{stack}}/adminconfig/v2/status

where {{stack}} represents my instance name, defined at the collection level, with the bearer token configured in the Authorization tab. However, when executing the request, it loops for approximately 30 seconds to a minute before resulting in the following error message:

{ "code": "500-internal-server-error", "message": "An error occurred while processing this request. Trying this request again may succeed if the bug is transient, otherwise please report this issue this response. (requestID=426a14b3-97e3-968a-a924-f3abc4300795). Please refer to https://docs.splunk.com/Documentation/SplunkCloud/latest/Config/ACSerrormessages for general troubleshooting tips." }

Despite my efforts, this error has persisted for over 24 hours, and I have no idea what the root cause could be. Could anyone advise on how to address this issue and successfully configure the necessary settings? Any assistance would be greatly appreciated. Thank you.
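For reference, the request shape Postman should be producing can be reproduced from the command line to rule out collection-level issues (the stack name and token below are placeholders):

```shell
# Sketch: call the ACS status endpoint directly, bypassing Postman.
export STACK="<your-stack-name>"
export ACS_TOKEN="<your-bearer-token>"

curl -sS "https://admin.splunk.com/${STACK}/adminconfig/v2/status" \
  -H "Authorization: Bearer ${ACS_TOKEN}"
```

If the same 500 appears here, the problem is server-side rather than in the Postman setup; it is also worth checking the ACS documentation's compatibility notes, since not every Splunk Cloud experience or stack type supports ACS.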