All Posts


If DB Connect is not supported with your current Splunk version, then plan to upgrade Splunk to a supported level to do what you want. Even if the combination is not officially supported, you can sometimes still install the add-on and it may work, but you take on that risk, and it is not advisable for production environments.
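If you need to confirm the exact Splunk version before checking it against the DB Connect compatibility matrix, a quick REST search works. This is only a generic sketch using the standard server info endpoint, nothing DB Connect specific:

| rest /services/server/info splunk_server=local
| table splunkServer version build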
Hey Giuseppe, I followed all of your steps for HA configuration: built a new search head, master node, and new indexer, enabled clustering, and added the search head and newly built indexer to the cluster. Once I added the formerly standalone Splunk server to the cluster, the Splunk service wouldn't start, and it still fails when I try to start it. Any idea why this would be?
Getting data in requires a number of steps and some investigation work. Some high-level notes/tips:
1. The first thing you need to do is determine what data you want from Cloudflare; they offer a number of services.
2. Investigate what options they provide for getting the data you want: logs, API, syslog, etc.
3. Then explore Splunkbase (type in Cloudflare) and see if there is an add-on (this is what typically helps you collect the data). You will need to do some homework to find out whether it supports the collection methods you identified in step 2.
Once you have this, deploy the TA as per its instructions and connect it to the data source (a rough example of one possible input is sketched below).
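Purely as an illustration, if the syslog route from step 2 were chosen, the receiving input might look something like the inputs.conf stanza below. The port, index, and sourcetype are placeholders picked for the sketch, not values taken from any Cloudflare add-on:

[udp://5514]
index = cloudflare
sourcetype = cloudflare:syslog
disabled = 0

A Splunkbase add-on will usually ship its own inputs (for example API-based modular inputs), in which case you would follow its documentation instead of hand-writing a stanza like this.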
Hello, thank you for the response. I had taken captures; there are only two TLS records, followed by an ACK and a FIN, ACK:
TLSv1.2 Client Hello
TLSv1.2 Server Hello, Certificate, Server Key Exchange, Server Hello Done
TCP [ACK]
TCP [FIN, ACK]
I understand the issue is with the client certificate. Can you kindly help me answer the below:
Where do I find the certificates used by the TA-cisco-cloud-security-umbrella-addon in Splunk?
What is the path/location of the certificate store used by the TA-cisco-cloud-security-umbrella-addon?
Can I keep my formerly standalone server, now one of my indexers, as the deployment server? Or do I need to designate another server as the deployment server?
Currently working on these steps. I have copied the indexes.conf from the standalone instance to the master node. Will I need to copy that config file to the new indexer as well?
This indicates that the SSL certificate is either missing from the certificate store used by the add-on or has expired. Additionally, if the server is configured to use a self-signed or third-party certificate, that certificate may not be included in the certificate store the add-on uses.
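One way to narrow this down is to look for TLS errors recorded in _internal. The search below is only a rough keyword-based sketch; the exact source, components, and error strings depend on the add-on and how it logs:

index=_internal sourcetype=splunkd log_level=ERROR ("SSL" OR "certificate" OR "handshake")
| table _time component _raw

Python-based add-ons also tend to write their own log files under $SPLUNK_HOME/var/log/splunk, which are indexed into _internal as well, so widening the search beyond sourcetype=splunkd may help.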
Hi @Jonathan.Wang, I found this existing post that talks about the same issue. Check it out and let me know if it helps. https://community.appdynamics.com/t5/Java-Java-Agent-Installation-JVM/Install-Events-service-Error/m-p/52419
@ankitarath2011 Please have a look  https://www.splunk.com/en_us/blog/tips-and-tricks/collecting-docker-logs-and-stats-with-splunk.html?locale=en_us  https://www.tekstream.com/blog/containerization-and-splunk-how-docker-and-splunk-work-together/ 
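If you end up using the HTTP Event Collector approach those articles describe, the Splunk side is essentially just an HEC token that the Docker logging driver points at. A minimal, hypothetical inputs.conf sketch follows; the token is a placeholder and the index/sourcetype are names made up for the example:

[http]
disabled = 0

[http://docker_logs]
token = 11111111-2222-3333-4444-555555555555
index = docker
sourcetype = docker:json
disabled = 0

In practice you would normally create the token through the UI (Settings > Data Inputs > HTTP Event Collector) rather than editing the conf file by hand.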
Hi @Srinath.S, I found this AppD Docs page: https://docs.appdynamics.com/appd/24.x/24.7/en/cisco-appdynamics-essentials/dashboards-and-reports Let me know if it helps with your question. 
Many thanks! I was troubleshooting why Splunk was not reading the Security event log. After adding "NT Service\SplunkForwarder" to the "Event Log Readers" group, it finally works.
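For anyone hitting the same issue: the forwarder-side input that reads that log is the standard WinEventLog stanza in inputs.conf. A minimal sketch is below (the index name is a placeholder); it only works once the forwarder's service account can actually read the Security log, e.g. via the Event Log Readers group as described above.

[WinEventLog://Security]
disabled = 0
index = wineventlog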
I'm trying to call the nslookupsearch custom command. All it does is an nslookup for an IP or computer name. I'm trying to use it in a search because some of the data we ingest doesn't contain the information we need, so we implemented the custom command to run an nslookup and populate a table with the data it retrieves.
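For context, a custom streaming command like this is normally just piped into the search after the field it needs is present. The sketch below is hypothetical; the index, sourcetype, and field names depend entirely on how nslookupsearch was implemented and what it expects and returns:

index=network sourcetype=firewall
| fields src_ip
| nslookupsearch
| table src_ip resolved_hostname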
You could try something along these lines:

| makeresults format=csv data="index,1-Aug,8-Aug,15-Aug,22-Aug,29-Aug
index1,5.76,5.528,5.645,7.666,6.783
index2,0.017,0.023,0.036,0.033,14.985
index3,2.333,2.257,2.301,2.571,0.971
index4,2.235,1.649,2.01,2.339,2.336
index5,19.114,14.179,14.174,18.46,19.948"
``` the lines above simulate your data (without the calculations) ```
| untable index date size
| eval date=strptime(date."-2024","%d-%b-%Y")
| fieldformat date=strftime(date,"%F")
| sort 0 index date
| streamstats last(size) as previous window=1 global=f current=f by index
| eval relative_size = 100 * size / previous
| fields - previous
| appendpipe [| eval date=strftime(date, "%F")." change" | xyseries index date relative_size]
| appendpipe [| eval date=strftime(date, "%F") | xyseries index date size]
| fields - date size relative_size
| stats values(*) as * by index
Hi @ferdousfahim, I usually apply these transformations at search time, but to apply them on forwarders you have to use INDEXED_EXTRACTIONS=CSV in props.conf. For more info, see https://docs.splunk.com/Documentation/Splunk/9.3.0/Data/Extractfieldsfromfileswithstructureddata Ciao. Giuseppe
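As a rough example, the props.conf stanza on the forwarder (or wherever the file is first parsed) might look like the following. The sourcetype name, timestamp field, and time format are placeholders to replace with your own:

[my_csv_data]
INDEXED_EXTRACTIONS = CSV
HEADER_FIELD_LINE_NUMBER = 1
FIELD_DELIMITER = ,
TIMESTAMP_FIELDS = timestamp
TIME_FORMAT = %Y-%m-%d %H:%M:%S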
Hi @MoeTaher, please try something like this:

index=EDR
| stats count
| eval Status=if(count > 0, "Compliant", "Not Compliant"), Solution="EDR"
| fields - count
| append
    [ | inputlookup compliance.csv
      | fields Solution Status ]
| stats first(Status) AS Status BY Solution
| outputlookup compliance.csv

Ciao. Giuseppe
Thanks, it worked! All I have to do is convert it to a percentage and we're all good to go. I'll pass along the karma.
Hi @MK3, good for you, see you next time! Ciao and happy splunking. Giuseppe P.S.: Karma Points are appreciated by all the contributors
I found out what the problem was. There is a Cribl server between the UF and the indexer, which I mistakenly ruled out as the source of the problem during troubleshooting. I bypassed Cribl for a while and the problem disappeared. The rest went pretty fast: I found that a persistent queue was enabled for the Linux input/source in "Always On" mode. The persistent queue was not turned on for the Windows input/source, and Windows logs were OK the whole time. After turning it off for the Linux data, the problem disappeared. I don't understand why the persistent queue behaves this way, but I don't have time to investigate further; maybe it's a Cribl bug or a misunderstanding of the functionality. The persistent queue is not required in this project, so I can leave it off. For me, it's currently resolved. Thank you all for your help and your time.
I have a search that returns values for dates, and I want to calculate the changes between the dates. What I want would look something like this:

index  | 1-Aug  | 8-Aug  | Aug 8 change | 15-Aug | Aug 15 change | 22-Aug | Aug 22 change | 29-Aug | Aug 29 change
index1 | 5.76   | 5.528  | 96%          | 5.645  | 102%          | 7.666  | 136%          | 6.783  | 88%
index2 | 0.017  | 0.023  | 135%         | 0.036  | 157%          | 0.033  | 92%           | 14.985 | 45409%
index3 | 2.333  | 2.257  | 97%          | 2.301  | 102%          | 2.571  | 112%          | 0.971  | 38%
index4 | 2.235  | 1.649  | 74%          | 2.01   | 122%          | 2.339  | 116%          | 2.336  | 100%
index5 | 19.114 | 14.179 | 74%          | 14.174 | 100%          | 18.46  | 130%          | 19.948 | 108%

I have a search that returns the values without the change calculations:

| loadjob savedsearch="me@email.com:splunk_instance_monitoring:30 Days Ingest By Index"
| eval day_of_week=strftime(_time,"%a"), date=(strftime(_time,"%Y-%m-%d"))
| search day_of_week=Tue
| fields - _time day_of_week
| transpose header_field=date
| rename column AS index
| sort index
| addcoltotals label=Totals labelfield=index

If the headers were something like "week 1" "week 2" I could get what I want, but with date headers that change every time, I can't. I've tried using foreach to iterate through and calculate the changes from one column to the next but haven't been able to come up with the right solution. Can anyone help?
Is there a way to see who modified system settings in Splunk Cloud? For example, we recently had an issue where a Splunk IP allow list was modified, but we cannot seem to find the activity in the _internal or _audit indexes.
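As a starting point, configuration changes made through the UI or REST API usually show up as POST requests in splunkd's access log, which is indexed into _internal. The search below is only a sketch: it assumes the usual field extractions for sourcetype=splunkd_access (user, method, uri, status) are available, and it does not filter down to allow-list changes specifically.

index=_internal sourcetype=splunkd_access (method=POST OR method=DELETE)
| table _time user method uri status

For Splunk Cloud specifically, some administrative changes are made by Splunk-managed backend processes and may not be visible this way, so a support case can also be worth opening.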