All Posts


Try something like this:

| tstats latest(_time) as _time BY index sourcetype

However, Splunk is not good at finding things which aren't there, so if your sourcetype has had no events within your time period, it will not show up.
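A sketch of one way to extend that into a "stale sourcetypes" report (the index=* scope and the 7-day threshold are illustrative assumptions, not from the original post):

| tstats latest(_time) as latest_time where index=* by index sourcetype
| eval days_silent = round((now() - latest_time) / 86400, 1)
| where days_silent > 7
| fieldformat latest_time = strftime(latest_time, "%F %T")
| sort - days_silent

Anything that has never sent a single event will still be absent, for the reason given above.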
I am not sure what you are asking for - you seem to be calculating Pearson's coefficient with TotalCount as one of the variables. Is this always the case? In any event, have you considered using macros to speed up writing out the calculations?
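For illustration, parameterised macros could generate the repeated eval expressions; this is only a sketch, and the macro names (sq, prod) are hypothetical:

macros.conf:

[sq(1)]
args = f
definition = eval sq_$f$ = '$f$' * '$f$'

[prod(2)]
args = x, y
definition = eval product_$x$_$y$ = '$x$' * '$y$'

In the search you would then write `sq(usages)` | `prod(TotalCount, usages)` and so on, instead of spelling out each eval.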
Hi Team, I am trying to create a search which shows me the list of all sourcetypes and indexes which are not in use, or let's say have had zero/few events over the last few days. Can you please advise. Thanks
Hi, it was due to the user account configured to run the Splunk forwarder Windows service: it was a local user account without the necessary rights. I changed it to the Local System account and the events started to flow in. Thanks, Awni
Last question. How can I configure the UF as a receiver on 9997?
Hi @karthi2809, I suppose they mean getting data in and parsing it, in other words the process of getting data indexed and ready to use. E.g. if you read https://docs.splunk.com/observability/en/gdi/other-ingestion-methods/other-data-ingestion-methods.html, they are speaking of methods to get data in. Ciao. Giuseppe
Hi @Tumarbayev, yes, you have to configure the intermediate forwarder both as receiver and forwarder. If you use an HF you can do it all via the GUI; if you use a UF, you have to configure outputs.conf and inputs.conf. If you use a UF, also remember to configure maxKBps = 0 in limits.conf, otherwise you'll have queue issues. Then, on the target UFs, you have to configure outputs.conf to point to the intermediate forwarder. Ciao. Giuseppe
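To make the file layout concrete, here is a minimal sketch, assuming a UF acting as the intermediate forwarder, the default port 9997, and hypothetical host names:

inputs.conf (on the intermediate forwarder):

[splunktcp://9997]
disabled = 0

outputs.conf (on the intermediate forwarder):

[tcpout]
defaultGroup = primary_indexers

[tcpout:primary_indexers]
server = idx1.example.com:9997, idx2.example.com:9997

limits.conf (on the intermediate forwarder, UF only):

[thruput]
maxKBps = 0

outputs.conf (on each target UF):

[tcpout]
defaultGroup = intermediate

[tcpout:intermediate]
server = intermediate-fwd.example.com:9997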
Thanks in advance. I had a call from a company and they asked whether I have experience in Splunk ingestion. I thought that is data onboarding from the GUI, right? Or something different?
@PickleRick wrote: 3. [...] data onboarding includes finding the proper "main" timestamp within the event (some events can have multiple timestamps; in such case you need to decide which is the primary timestamp for the event) and making sure it's getting parsed out properly so that the event is indexed at the proper point in time

That field is `my_timestamp`. I'm currently using sourcetype `_json`, in the hope that I don't need to get into parsing data too much. However, in order to fetch the latest metric by `my_timestamp`, I guess I either need to somehow tell Splunk that that field is a timestamp, or just treat this (ISO format) timestamp as a string for that purpose (since ISO strings do sort correctly)? If the former, perhaps I need to define a new sourcetype?

@PickleRick also wrote: 4. Yes, latest(X) looks for the latest value of field X. It doesn't mind any other fields. So latest(X) and latest(Y) will show you the latest seen values of fields X and Y respectively, but they don't have to be from the same event. If one event had only field X, and the other one had only field Y, you'd still get both of them in your results since either of them was the last occurrence of the respective field.

How should I approach this to get the latest record, rather than the latest field?
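One sketch of fetching the whole latest record by the embedded timestamp, assuming my_timestamp is ISO 8601 with a timezone offset (the format string is an assumption to adjust to the actual data):

| eval event_time = strptime(my_timestamp, "%Y-%m-%dT%H:%M:%S%z")
| sort 0 - event_time
| head 1

Alternatively, defining a new sourcetype whose props.conf points timestamp recognition at that field (e.g. TIME_PREFIX/TIME_FORMAT, or TIMESTAMP_FIELDS together with INDEXED_EXTRACTIONS = json) would make _time itself reflect my_timestamp, after which latest(_raw), or head 1 on a time-sorted search, returns the latest event.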
You mean I should configure the Universal Forwarder in receiving mode, and then on the sender server configure outputs to point to the configured forwarder?
Hi @Tumarbayev, let me understand: are you speaking of an intermediate forwarder that collects logs from other Universal Forwarders and sends them to the indexers, is that correct? Are you speaking of a Heavy or a Universal Forwarder? Anyway, you can use either a Universal or a Heavy Forwarder as concentrator, even if I usually use an HF. At first you have to configure your HF concentrator to forward logs to the indexers. Then you have to enable receiving on a port (default 9997) on the HF. Finally, you have to configure your target UFs to send their logs to the HF using the defined port (9997). If you have HA requirements, it's better to have two HFs to avoid single points of failure. Ciao. Giuseppe
We have 24 indexers in an indexer cluster. Recently the CPU usage is almost 100%, not on all the indexers but fluctuating between them. Under the indexer clustering section, I can see the status going to "Pending" randomly between the indexers for a few seconds. This happens continuously and also causes an increase in the number of fixup buckets. I have restarted the indexer servers manually where I saw high CPU load, but it did not resolve the issue. What would be the best option to fix this, and what is the possible root cause? Any suggestions would be very helpful. Thanks in advance!
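As a hedged starting point for narrowing down which processes are consuming the CPU, something like this against the _introspection index may help (host=idx* is a placeholder for your indexer naming scheme):

index=_introspection sourcetype=splunk_resource_usage component=PerProcess host=idx*
| eval process = 'data.process' . ":" . 'data.args'
| timechart span=5m avg(data.pct_cpu) by process limit=10

Correlating the CPU spikes with bucket fixup activity in the clustering dashboards should help tell an indexing/search load problem apart from a cluster stability problem.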
Hello team. My task is that the universal forwarder should collect the events from other hosts and then relay them to the main server. How can I do it?
| makeresults
| eval _time=strptime("6 Nov 2023","%d %b %Y")
| eval _raw="OEM Model Type NCAPTest
Honda Civic Sedan No
Honda CR-V SUV Yes
Honda Fit Hatchback No
VW Jetta Sedan Yes
VW Tiguan SUV Yes
VW Golf Hatchback No
Tata Harrier SUV Yes
Tata Tiago Hatchback No
Tata Altroz Hatchback No
Kia Seltos SUV No
Kia Forte Sedan No
Kia Rio Hatchback No
Hyundai Elantra Sedan No
Hyundai Kona SUV Yes
Hyundai i20 Hatchback No"
| append
    [| makeresults
    | eval _time=strptime("13 Nov 2023","%d %b %Y")
    | eval _raw="OEM Model Type NCAPTest
Honda Civic Sedan Yes
Honda CR-V SUV Yes
Honda Fit Hatchback No
VW Jetta Sedan Yes
VW Tiguan SUV Yes
VW Golf Hatchback No
Tata Harrier SUV Yes
Tata Tiago Hatchback No
Tata Altroz Hatchback Yes
Kia Seltos SUV No
Kia Forte Sedan Yes
Kia Rio Hatchback Yes
Hyundai Elantra Sedan No
Hyundai Kona SUV Yes
Hyundai i20 Hatchback No"]
| append
    [| makeresults
    | eval _time=strptime("20 Nov 2023","%d %b %Y")
    | eval _raw="OEM Model Type NCAPTest
Honda Civic Sedan Yes
Honda CR-V SUV Yes
Honda Fit Hatchback Yes
VW Jetta Sedan Yes
VW Tiguan SUV Yes
VW Golf Hatchback Yes
Tata Harrier SUV Yes
Tata Tiago Hatchback Yes
Tata Altroz Hatchback Yes
Kia Seltos SUV Yes
Kia Forte Sedan Yes
Kia Rio Hatchback Yes
Hyundai Elantra Sedan Yes
Hyundai Kona SUV Yes
Hyundai i20 Hatchback Yes"]
| multikv forceheader=1
| table _time OEM Model Type NCAPTest
``` The lines above create sample events in line with your example ```
``` Count total by time and OEM ```
| eventstats count as total by _time OEM
``` Count by time OEM total and test result ```
| stats count by _time OEM total NCAPTest
``` Determine percentages ```
| eval count=round(100*count/total,2)
``` Separate yes and no percentages ```
| eval {NCAPTest}=count
``` Gather no and yes percentages by time and OEM ```
| stats values(No) as No values(Yes) as Yes by _time OEM
``` Fill nulls with zero percent ```
| fillnull value=0.00 Yes No

For visualisation, use a column chart with trellis.
Hi PickleRick, thanks for your insights. I am currently using multiple LDAP strategies and it is causing me a lot of problems, like Anton mentioned with the connection order. We are looking into possibly transitioning from managing multiple LDAP strategies to "one LDAP strategy, many LDAP groups", but that would require a lot of manpower on our end. Would love to hear your thoughts on managing LDAP strategies vs LDAP groups.
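For illustration, a single-strategy layout along those lines might look like this in authentication.conf; the strategy name, host, and DNs are hypothetical and would need adapting to your directory, with multiple group containers listed in one groupBaseDN separated by semicolons:

[corp_ldap]
host = ldap.example.com
port = 636
SSLEnabled = 1
bindDN = cn=splunk-bind,ou=service,dc=example,dc=com
userBaseDN = ou=people,dc=example,dc=com
groupBaseDN = ou=splunk-admins,dc=example,dc=com;ou=splunk-users,dc=example,dc=com
userNameAttribute = sAMAccountName
realNameAttribute = cn
groupNameAttribute = cn
groupMemberAttribute = member
groupMappingAttribute = dn

Role mapping then happens per group under a [roleMap_corp_ldap] stanza, so one strategy can serve many roles.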
Thank you for your reply. Yes, I have TA_nix installed and have the uptime.sh input too. However, I would like to be alerted whenever the Ubuntu server is down. Could you please suggest how to do that?
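One common approach is a scheduled alert that fires when the host has gone quiet. A minimal sketch, assuming a hypothetical host name and a 15-minute threshold:

| tstats latest(_time) as last_seen where index=* host=my-ubuntu-server by host
| eval minutes_silent = round((now() - last_seen) / 60)
| where minutes_silent > 15

Schedule it every few minutes and alert when results are returned. Strictly speaking this detects "no data from the host" rather than "server down", but for a server with a forwarder installed the two usually coincide.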
Hello, I have a problem: I want to calculate Pearson's coefficient to do correlation in a loop, but I have a big issue. I have more than 23 fields, and doing the calculation manually wastes a lot of time and produces a very long search. Does someone know how I can get the loop result without using the ML Toolkit?

| fields TotalCount, usages, licenseb, StorageMb, Role_number, Siglum_number, SourceTypeDescription_number *_number
| eval sq_TotalCount = TotalCount * TotalCount
| eval sq_usages = usages * usages
| eval sq_licenseb = licenseb * licenseb
| eval sq_StorageMb = StorageMb * StorageMb
| eval sq_Role_number = Role_number * Role_number
| eval sq_Siglum_number = Siglum_number * Siglum_number
| eval sq_SourceTypeDescription_number = SourceTypeDescription_number * SourceTypeDescription_number
| eval product_TotalCount_usages = TotalCount * usages
| eval product_TotalCount_licenseb = TotalCount * licenseb
| eval product_TotalCount_StorageMb = TotalCount * StorageMb
| eval product_TotalCount_Role_number = TotalCount * Role_number
| eval product_TotalCount_Siglum_number = TotalCount * Siglum_number
| eval product_TotalCount_SourceTypeDescription_number = TotalCount * SourceTypeDescription_number
| stats sum(TotalCount) as sum_TotalCount, sum(sq_TotalCount) as sum_sq_TotalCount,
    sum(usages) as sum_usages, sum(sq_usages) as sum_sq_usages,
    sum(licenseb) as sum_licenseb, sum(sq_licenseb) as sum_sq_licenseb,
    sum(StorageMb) as sum_StorageMb, sum(sq_StorageMb) as sum_sq_StorageMb,
    sum(Role_number) as sum_Role_number, sum(sq_Role_number) as sum_sq_Role_number,
    sum(Siglum_number) as sum_Siglum_number, sum(sq_Siglum_number) as sum_sq_Siglum_number,
    sum(SourceTypeDescription_number) as sum_SourceTypeDescription_number, sum(sq_SourceTypeDescription_number) as sum_sq_SourceTypeDescription_number,
    sum(product_TotalCount_usages) as sum_TotalCount_usages,
    sum(product_TotalCount_licenseb) as sum_TotalCount_licenseb,
    sum(product_TotalCount_StorageMb) as sum_TotalCount_StorageMb,
    sum(product_TotalCount_Role_number) as sum_TotalCount_Role_number,
    sum(product_TotalCount_Siglum_number) as sum_TotalCount_Siglum_number,
    sum(product_TotalCount_SourceTypeDescription_number) as sum_TotalCount_SourceTypeDescription_number,
    count as count
| eval pearson_TotalCount_usages = ((count * sum_TotalCount_usages) - (sum_TotalCount * sum_usages)) / (sqrt((count * sum_sq_TotalCount) - (sum_TotalCount * sum_TotalCount)) * sqrt((count * sum_sq_usages) - (sum_usages * sum_usages))),
    pearson_TotalCount_licenseb = ((count * sum_TotalCount_licenseb) - (sum_TotalCount * sum_licenseb)) / (sqrt((count * sum_sq_TotalCount) - (sum_TotalCount * sum_TotalCount)) * sqrt((count * sum_sq_licenseb) - (sum_licenseb * sum_licenseb))),
    pearson_TotalCount_StorageMb = ((count * sum_TotalCount_StorageMb) - (sum_TotalCount * sum_StorageMb)) / (sqrt((count * sum_sq_TotalCount) - (sum_TotalCount * sum_TotalCount)) * sqrt((count * sum_sq_StorageMb) - (sum_StorageMb * sum_StorageMb))),
    pearson_TotalCount_Role_number = ((count * sum_TotalCount_Role_number) - (sum_TotalCount * sum_Role_number)) / (sqrt((count * sum_sq_TotalCount) - (sum_TotalCount * sum_TotalCount)) * sqrt((count * sum_sq_Role_number) - (sum_Role_number * sum_Role_number))),
    pearson_TotalCount_Siglum_number = ((count * sum_TotalCount_Siglum_number) - (sum_TotalCount * sum_Siglum_number)) / (sqrt((count * sum_sq_TotalCount) - (sum_TotalCount * sum_TotalCount)) * sqrt((count * sum_sq_Siglum_number) - (sum_Siglum_number * sum_Siglum_number))),
    pearson_TotalCount_SourceTypeDescription_number = ((count * sum_TotalCount_SourceTypeDescription_number) - (sum_TotalCount * sum_SourceTypeDescription_number)) / (sqrt((count * sum_sq_TotalCount) - (sum_TotalCount * sum_TotalCount)) * sqrt((count * sum_sq_SourceTypeDescription_number) - (sum_SourceTypeDescription_number * sum_SourceTypeDescription_number)))
| table pearson_TotalCount_usages, pearson_TotalCount_licenseb, pearson_TotalCount_StorageMb, pearson_TotalCount_Role_number, pearson_TotalCount_Siglum_number, pearson_TotalCount_SourceTypeDescription_number
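A sketch of how the repeated evals could be generated with foreach instead of being written out by hand (the field list and wildcards are illustrative and assume your naming scheme):

| fields TotalCount, usages, licenseb, StorageMb, *_number
| eval sq_TotalCount = TotalCount * TotalCount
| foreach usages licenseb StorageMb *_number
    [ eval sq_<<FIELD>> = '<<FIELD>>' * '<<FIELD>>', product_TotalCount_<<FIELD>> = TotalCount * '<<FIELD>>' ]

The stats aggregations still have to be spelled out (or generated with a macro), but after the stats the per-pair Pearson formula can be templated the same way, using <<MATCHSTR>> to recover the variable name:

| foreach sum_TotalCount_*
    [ eval pearson_TotalCount_<<MATCHSTR>> = ((count * '<<FIELD>>') - (sum_TotalCount * 'sum_<<MATCHSTR>>')) / (sqrt((count * sum_sq_TotalCount) - (sum_TotalCount * sum_TotalCount)) * sqrt((count * 'sum_sq_<<MATCHSTR>>') - ('sum_<<MATCHSTR>>' * 'sum_<<MATCHSTR>>'))) ]

foreach substitutes <<FIELD>> and <<MATCHSTR>> textually inside the brackets, so each matching field expands into one eval.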
Hi @rolypolytoyy, there's a requirement for alerts to be visible in Alert Manager: alerts must be shared at Global level, otherwise they aren't visible. Are your alerts shared at Global level? Ciao. Giuseppe
There is also some summary information in the _telemetry index:

index=_telemetry licenseGroup=Enterprise component=LicenseUsageSummary

There the information is on a daily basis. Another option is to extend the retention time for _internal. This is the only way if you want to look 60 days back and select different dimensions for license usage.
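For reference, the classic per-index breakdown from _internal looks something like this (the 30-day window is illustrative and depends on your retention):

index=_internal source=*license_usage.log* type=Usage earliest=-30d
| eval GB = b / 1024 / 1024 / 1024
| timechart span=1d sum(GB) by idx

The b (bytes) and idx (index) fields come from the Usage events in license_usage.log.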
The code for this issue is here: https://github.com/NathanDotTo/structurizr-onpremises/blob/main/structurizr-onpremises/Dockerfile_service

I am using the AppD agent within a Tomcat based web app. The agent directory is copied into the container unaltered from the original zip file:

ENV APPDAGENTDIR=AppServerAgent-1.8-23.10.0.35234
ADD $APPDAGENTDIR /$APPDAGENTDIR
RUN chown -R root /$APPDAGENTDIR
RUN chmod -R a+rwx /$APPDAGENTDIR

I start the web app with:

ENV CATALINA_OPTS="-Xms512M -Xmx512M -javaagent:/AppServerAgent-1.8-23.10.0.35234/javaagent.jar"

I get this error:

>>>> MultiTenantAgent Dynamic Service error - could not open Dynamic Service Log /AppServerAgent-1.8-23.10.0.35234/ver23.10.0.35234/logs/8b60cbc478b0/argentoDynamicService_11-27-2023-08.17.53.log
Running as user root
Cannot write to parent folder /AppServerAgent-1.8-23.10.0.35234/ver23.10.0.35234/logs/8b60cbc478b0
Could NOT get owner for MultiTenantAgent Dynamic Services Folder
Likely due to fact that owner (null) is not same user as the runtime user (root) which means you will need to give group write access using this command: find external-services/argentoDynamicService -type d -exec chmod g+w {}
Possibly due to lack of permissions or file access to folder: Exists: false, CanRead: false, CanWrite: false
Possibly due to lack of permissions or file access to log: Exists: false, CanRead: false, CanWrite: false
Possibly due to java.security.Manager set - null
Possibly due to missed agent-runtime-dir in Controller-XML and will need the property set to correct this...
Call Stack: java.io.FileNotFoundException: /AppServerAgent-1.8-23.10.0.35234/ver23.10.0.35234/logs/8b60cbc478b0/argentoDynamicService_11-27-2023-08.17.53.log (No such file or directory)

From within the container I can see that the logs directory is owned by root:

cd /AppServerAgent-1.8-23.10.0.35234/ver23.10.0.35234/logs/
root@8b60cbc478b0:/AppServerAgent-1.8-23.10.0.35234/ver23.10.0.35234/logs# ls -la
total 16
drwxrwxrwx 1 root root 4096 Nov 27 08:20 .
drwxrwxrwx 1 root root 4096 Oct 27 20:45 ..
drwxr-x--- 2 root root 4096 Nov 27 08:20 Tomcat@8b60cbc478b0_8005
root@8b60cbc478b0:/AppServerAgent-1.8-23.10.0.35234/ver23.10.0.35234/logs#

Since the logs directory is clearly owned by root, I suspect that the error message is simply misleading. Any suggestions please?

Many thanks

Nathan