All Posts

Hi Team, I am monitoring Bluecoat proxy logs via the syslog log collection method. My inputs.conf file is configured to read all logs under /opt/splunk/syslog/symantec/bluecoat/*/*.log; below is the current configuration. Now I need to exclude events that have cs-host=nxtengine.cpga.net.qa from indexing.

[monitor:///opt/splunk/syslog/symantec/bluecoat/*/*.log]
sourcetype = bluecoat:proxysg:access:syslog
index = cus_XXX
host_segment = 6
disabled = false

Sample raw log below:

2024-08-07T14:12:37+03:00 10.253.253.44 Bluecoat|src=X.x.x.x|srcport=53936|dst=x.x.x.x|dstport=8443|username=abcdef$|devicetime=[07/08/2024:11:12:32 GMT]|s-action=TCP_DENIED|sc-status=407|cs-method=CONNECT|time-taken=11|sc-bytes=247|cs-bytes=816|cs-uri-scheme=tcp|cs-host=nxtengine.cpga.net.qa|cs-uri-path=/|cs-uri-query=-|cs-uri-extension=-|cs-auth-group=-|rs(Content-Type)=-|cs(User-Agent)=Mozilla/5.0|cs(Referer)=-|sc-filter-result=DENIED|filter-category=none|cs-uri=tcp://nxtengine.cpga.net.qa:8443/
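A common way to drop such events before indexing (a sketch, assuming parsing happens on this instance or a heavy forwarder; the transform name below is our own choice, not an existing one) is to route matching events to nullQueue via props.conf and transforms.conf:

```
# props.conf
[bluecoat:proxysg:access:syslog]
TRANSFORMS-drop_nxtengine = drop_nxtengine_events

# transforms.conf
[drop_nxtengine_events]
REGEX = cs-host=nxtengine\.cpga\.net\.qa
DEST_KEY = queue
FORMAT = nullQueue
```

Events whose raw text matches the REGEX are discarded at parse time; everything else in the monitor stanza is indexed as before.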
Is there a one-to-one relationship between host and device name? Could you use a lookup after the tstats? What does your stats search look like? Perhaps there may be ways to optimise it.
It looks like, for trellis pie charts, you have to calculate the values as percentages, i.e. each row should add up to 100. Since you already have the free percentage, you can simply calculate the used percentage.
Hello Everyone,

I'm experiencing a problem with the latest version of Missile Map (1.6.0). The animated arrows remain static when the page initially loads, and the animations only begin when I manually zoom in or out of the map.

This is an issue, as the animations used to start automatically as soon as the dashboard page was loaded.

Thank you for your assistance.
Hi @suvi6789,
just as a test, please try this:

index="abc"
| stats count(eval(searchmatch("error 1234"))) AS "Error1"
        count(eval(searchmatch("error 567"))) AS "Error2"
        count(eval(searchmatch("error 89"))) AS "Error3"

The issue is probably in the data; you should analyze it.
Ciao.
Giuseppe
Thank you for your response. If I comment out the first searchmatch:

index="abc"
| eval JobName = case(
    ```searchmatch("error 1234"), "Error1",```
    searchmatch("error 567"), "Error2",
    searchmatch("error 89"), "Error3"
)

the result is now: Error2 - 125
Hi @suvi6789,
the search is correct; are you sure about the strings being searched for Error2 and Error3?
Just for debugging, please change the order of the searchmatch conditions in the eval.
Ciao.
Giuseppe
Thanks for the response. My bad, the parentheses were wrong; it was a typo. I have run the query with the right parentheses:

index="abc"
| eval JobName = case(
    searchmatch("error 1234"), "Error1",
    searchmatch("error 567"), "Error2",
    searchmatch("error 89"), "Error3"
)
| stats count by JobName

The output says: Error1 - 234 (234 is the count of that error). Though error 2 and error 3 are present, they are not listed in the results.
Hi @suvi6789,
the parentheses are wrong and, since Error1, Error2, and Error3 are strings, they need quotes:

index="abc"
| eval JobName = case(
    searchmatch("error 1234"), "Error1",
    searchmatch("error 567"), "Error2",
    searchmatch("error 89"), "Error3"
)
| stats count by JobName
Hi, can anyone please help me frame this SPL? I have to collect the list of devices reporting into Splunk, along with the index name. For that I am using the tstats command:

| tstats count where index=* by host, index

The problem is that, for one index, the device name is in a field named 'asset'. To get the list from that index I can't use tstats, since it works only on indexed (metadata) fields. I tried the stats command instead, but it takes a very long time, which impacts performance. Please suggest how I should frame the query efficiently for this case. Thanks
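One hedged sketch (the index name cus_assets below is a placeholder for the index that uses the asset field): keep tstats for every other index, and append a normal search only for that one index, keeping its time range tight:

```
| tstats count where index=* AND index!=cus_assets by host, index
| append
    [ search index=cus_assets
      | stats count by asset, index
      | rename asset AS host ]
```

If the asset value happens to appear in the raw data as key=value pairs, `| tstats count where index=cus_assets by PREFIX(asset=)` (available from Splunk 8.0) may avoid the raw-event search entirely.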
Hi, I am running a list of different searches and want the count for each. I was using searchmatch, but I only get the first result that matches successfully; the rest are ignored. For example:

index="abc"
| eval JobName = case(
    searchmatch("error 1234", Error1),
    searchmatch("error 567", Error2),
    searchmatch("error 89", Error3)
)
| stats count by JobName

The output says: Error1 - 234 (234 is the count of that error). Though error 2 and error 3 are there, they are not listed in the results. Could you please suggest how to get this sorted?
Hello everyone, I am encountering an issue with sending emails for the alerts I have configured on Splunk. Here are the steps I followed:

SMTP Server Configuration: I set up an SMTP server using Postfix on a virtual machine (VM). I also configured the firewall on this VM to allow SMTP traffic.

Splunk Configuration: In Splunk, I configured the email server settings using my Postfix server information. I verified the settings under Settings -> Server settings -> Email settings, and everything seems correct.

Alert Configuration: I created several alerts and configured the "Send Email" action for each alert. I provided the recipients, subject, and email content.

Despite these configurations, I am not receiving any emails when the alerts are triggered.

Additional details:
- I tested sending emails from the command line on the VM with Postfix, and it works correctly.
- I checked the Splunk logs (splunkd.log) and did not find any obvious errors related to email sending.
- Postfix logs show that email requests do not seem to be reaching the server.

Questions:
- Are there any additional steps I might have missed in the Splunk configuration for sending emails?
- How can I diagnose why emails are not being sent from Splunk?
- Are there specific logs or configurations I should check again?

Thank you in advance for your help!
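One thing worth checking (a sketch; the sendemail alert action logs to Splunk's internal index rather than splunkd.log): this search often surfaces the actual SMTP error:

```
index=_internal source=*python.log* sendemail
```

If nothing appears there at all, the alert action may never be firing; Activity -> Triggered Alerts can confirm whether the alerts themselves are triggering.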
I get the same error message but I don't know what to do. Does anyone have a solution for this problem?
I'm getting the exact same error. Does anyone have a solution for this problem?
Yuanliu, thanks for the info. I will look into that and respond with my findings.
sadly no
Yes, you can copy the URL, decode the URL parameters, and paste it into a new search, but clicking on a bookmarklet is more convenient for me. If decoding your query due to the 414 error is a common occurrence, you could also make a CyberChef recipe to help. I don't know how much work it would take to make a bookmarklet that would POST the AST to the server instead.

I understand that your search has a large number of calculations, but you can use a macro to make the URL shorter:

index=test example.com
| `complex_calculations`
| `get_geoip_data(src_ip)`
| `multiple_stats_commands`

In that case, each macro can contain a very large number of commands. When possible, I create macros that are reusable, but that is not always appropriate. In particular, Splunk Enterprise Security content includes a separate filter macro for each Correlation Search so that false positives can be tuned out without editing the core detection logic.

Without access to your search query, it is difficult to know how to make the search smaller. In a Windows browser, you can press Ctrl-Shift-E while writing your search to show the "Expanded Search String" with the contents of all the macros shown.

These are a couple of examples of how I've moved long parsing and calculation strings into macros: get_datamodel_desc(1), entropy_digits_lowercase(1) (the Decrypt2 app is better than this macro).
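For reference, macros like the ones above live in macros.conf (or Settings -> Advanced search -> Search macros). A minimal sketch, using the hypothetical get_geoip_data(1) macro from the example:

```
# macros.conf
[get_geoip_data(1)]
args = ip_field
definition = iplocation $ip_field$
```

The argument is substituted wherever $ip_field$ appears, so `get_geoip_data(src_ip)` expands to `iplocation src_ip`.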
Hi, so like in the screenshot, but here it is again:

|mstats max("% Free Space") as "MB", max("Free Megabytes") as "FreeMB" WHERE index=m_windows_perfmon AND host=NTSAP10 span=1d by instance
|search instance!=hard*
|search instance!=_Total
|eval FreeDiskspace=round(FreeMB/1024,2)
|eval TotalDiskspace=round((FreeDiskspace/MB)*100,2)
|eval FullDiskspace=round(TotalDiskspace-FreeDiskspace,2)
|stats max("FreeDiskspace") as "Free Diskspace (GB)", max("FullDiskspace") as "Full Diskspace (GB)" by instance

So it's metrics I'm trying to use: the "% Free Space" and "Free Megabytes" metrics from my Windows perfmon index. I exclude instances that contain "hard" or "_Total", and then eval three versions of the disk space. This way I have the free disk space in GB, the total disk space, and the full (used) disk space.

The free and the full (used) disk space are the ones I have in the table, again as seen above, but when I try the pie chart it does not show what I am looking for. I'd like to have a pie chart for each instance showing the free and used space together. Right now it only shows me one of two things:
- only the free or only the full space per instance
- all free spaces from all instances in one pie chart
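Since trellis pie charts want each row to sum to 100, a hedged sketch (assuming the "% Free Space" metric is already a percentage of total disk, so used % is simply 100 minus free %):

```
| mstats max("% Free Space") as FreePct WHERE index=m_windows_perfmon AND host=NTSAP10 span=1d by instance
| search instance!=hard* instance!=_Total
| eval UsedPct = round(100 - FreePct, 2)
| stats max(FreePct) as "Free %", max(UsedPct) as "Used %" by instance
```

With both columns summing to 100 per row, a trellis pie chart split by instance should render the free and used slices together.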
Hi @haleyh44,
one additional piece of information: do you want HA (high availability) for your data or not? To have HA you need to create an Indexer Cluster, which requires an additional machine (the Cluster Manager) that cannot be one of the others. In any case, the two new machines have different requirements in terms of disk space: the new Indexers should have the same storage as the old server.

If you don't want HA, you have to:
- install Splunk on the two new servers,
- copy the indexes.conf and the Technology Add-ons from the old server to the new server that will be an Indexer,
- copy all the apps from the old server to the new server that will be the Search Head,
- configure the Search Head for Distributed Search as described in the links shared by @JohnEGones

If you want HA, you have to:
- install Splunk on the three new servers,
- configure an Indexer Cluster on the old server and two of the new ones,
- copy the indexes.conf and the Technology Add-ons from the old server to the Cluster Manager,
- copy all the apps from the old server to the new server that will be the Search Head,
- configure the Search Head for Distributed Search as described in the links shared by @JohnEGones

For more info about Splunk architectures, see https://www.splunk.com/en_us/pdfs/tech-brief/splunk-validated-architectures.pdf
Ciao.
Giuseppe
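As a sketch of the final step (hostname and credentials below are placeholders), a search peer can be added to the new Search Head either in the UI under Settings -> Distributed search -> Search peers, or via the CLI:

```
splunk add search-server https://new-indexer.example.com:8089 -auth admin:changeme -remoteUsername admin -remotePassword changeme
```

The -auth pair authenticates to the local Search Head, while -remoteUsername/-remotePassword authenticate against the indexer being added as a peer.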