All Posts

Hi @adrifesa95, the question is: do you have Enterprise Security or not? Either way, if you don't have Enterprise Security, you can apply my solution. Ciao. Giuseppe
Hi @svodela, if you're sure that your applications don't have numbers in their names and that the version is always in the format "nn.nn.nn", you could use a regex like the following to extract apps and versions, and run a search like this:

<your_search>
| rex "correlation-sit=(?<app>[A-Za-z]+)(?<version>\d+\.\d+\.\d+)"
| table app version

You can check the regex at https://regex101.com/r/FNieNJ/1 Ciao. Giuseppe
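If you want to try the regex before pointing it at your index, here is a quick self-contained test; makeresults just generates a dummy event, and the sample value is taken from the question:

| makeresults
| eval _raw="correlation-sit=sgs1.0.18u%26h%3d106"
| rex "correlation-sit=(?<app>[A-Za-z]+)(?<version>\d+\.\d+\.\d+)"
| table app version

This should return app=sgs and version=1.0.18.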
Ha (at myself). I've been using "splunk cmd btool" for over a decade and never noticed that "splunk btool" was added as a shortcut in 7.2.
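For anyone else who missed it, the two forms are equivalent; for example, to print the merged inputs configuration along with the file each setting was read from:

splunk cmd btool inputs list --debug
splunk btool inputs list --debug

Both produce the same output; the newer form simply drops the "cmd" wrapper.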
Hello Everyone, I'm running searches in Splunk Free Edition. It worked well for some time, but then I got the error "Search has been terminated. This is most likely due to a lack of memory." This occurs rather frequently. I created a free AWS instance using the Linux platform. Please suggest any solutions for this problem. (I've included a screenshot for reference.)
It's Splunk Cloud.
Hi @adrifesa95, are you speaking of Splunk Enterprise or Enterprise Security? If Enterprise Security, it's a very hard job to implement multitenancy, because ES isn't multitenant by default. If Splunk Enterprise, you could create different alerts for each zone, each working only on the indexes of that zone and sending emails only to the users of that zone; see the sketch below. Ciao. Giuseppe
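A minimal sketch of that per-zone approach in savedsearches.conf, assuming the zone_technology index naming from the question (the stanza name, search terms, schedule, and recipient address are all hypothetical):

[EU zone alert - hypothetical]
search = index=eu_* <your_alert_conditions>
cron_schedule = */15 * * * *
enableSched = 1
action.email = 1
action.email.to = eu-contacts@example.com

Repeat the same stanza per zone, changing only the index prefix and the recipients.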
We are trying to create a dashboard to understand the usage of our application versions, something like the table below:

Application Name    Version
sgs                 1.0.18

When we search a particular index for "sgs1.0.18*" source="/data/wso2/api_manager/current/repository/logs/wso2carbon.log", we get the result below:

<< uri="get api/mydetails/1.0.0/apime/employee-details?correlation-sit=sgs1.0.18u%26h%3d106", SERVICE_PREFIX="get api/mydetails/1.0.0/apime/employee-details?correlation-sit=sgs1.0.18u%26h%3d106", path="get api/mydetails/1.0.0/apime/employee-details?correlation-sit=sgs1.0.18u%26h%3d106", resourceMethod="get", HTTP_METHOD="get", resourceUri="api/mydetails/1.0.0/apime/employee-details?correlation-sit=sgs1.0.18u%26h%3d106"

Could you please help us with a sample Splunk query to achieve this result? Thanks
Hello Splunk Members, I need some help with the queries below:
- How many calls (read/write) can we make to Splunk in a given time period (per second)? What is the default setting in Splunk? Is it configurable? What are the max/min values, and how are they calculated?
- How much data can we send in a given time period (MB/GB)? Is it changeable? What are the min/max values?
- How fast can we make the next insertion? Is there a delay, or is it simultaneous? Could this cause any data loss if there is a connectivity failure or downtime?
- Is there any difference between using Splunk Enterprise in general and using the HEC method? (A typical HEC call is sketched below for reference.)
Thanks in advance.
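For reference, this is what a single HEC insertion looks like over HTTP; the host, port 8088, and token are placeholders for your own deployment:

curl -k https://<splunk-host>:8088/services/collector/event \
  -H "Authorization: Splunk <your-hec-token>" \
  -d '{"event": "hello world", "sourcetype": "manual"}'

Each such request is one write call, so any rate you measure would be in requests (and bytes) per second against this endpoint.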
Good morning, let me explain my situation: I have a Splunk tenant that belongs to a big company with branches in three zones. Each branch should only see the data of its own zone. The indexes are named in the form zone_technology, for example, eu_meraki. With this in mind, I have created a series of alerts, which are shared across all the zones and search across all the indexes. How could I make the warning email, when the alert is triggered, reach only the contacts of one zone? Thank you
Okay, thanks for the suggestions. I will reach out to our Exchange team to see if they can provide a solution, and I will post the outcome here. Thanks.
@inventsekar @dtburrows3 Thank you both for your replies. I was trying to use the spath command but was failing in the extraction. @dtburrows3: The second method using the for loop worked well. I am running this query against a large set of events. Do the for loop and the JSON functions have any limitations in that case, like results getting truncated and so on?
Thanks for your help. In my project there is no Cluster Master, but we do have a Deployment Server. Can I use the deployment server as the Cluster Master? The Search Head and Indexer are single instances. Regards, Vij
you rock amt, this works a treat!
Please add this to your inputs.conf and restart the Splunk service on the UF: crcSalt = <SOURCE> (see the sketch below). Then update the test log and check whether the Splunk indexer still shows redundant logs. Regarding "read from beginning": I was a bit confused with another topic this morning about monitoring archive files. More details here: https://docs.splunk.com/Documentation/Splunk/9.1.2/Data/Monitorfilesanddirectories
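To be concrete, crcSalt goes inside the monitor stanza itself; a minimal sketch, assuming the monitor stanza you posted (all other settings kept as-is):

[monitor://C:\Users\admin\Desktop\practicelogs.txt]
disabled = 0
index = practicelogs
sourcetype = practicelogs
crcSalt = <SOURCE>

Note that <SOURCE> here is a literal keyword, not a placeholder: it tells Splunk to mix the full file path into the checksum it uses to decide whether a file has already been read.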
Hi @syaseensplunk, if this is a sample of the logs to filter, the regex in the transforms.conf doesn't match any event. You have to use a different regex that matches the events, e.g. something like this: REGEX = \"app\":\"splunk-kubernetes-objects\" (see the sketch below for the full stanza), or a different one, which you can test at https://regex101.com/r/GnkJqh/1 Ciao. Giuseppe
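For context, a minimal sketch of how that regex would sit in a filtering configuration, assuming the usual props.conf/transforms.conf pairing that routes matching events to the nullQueue (the stanza and sourcetype names here are hypothetical):

# props.conf
[kube:objects]
TRANSFORMS-filter = drop_kubernetes_objects

# transforms.conf
[drop_kubernetes_objects]
REGEX = \"app\":\"splunk-kubernetes-objects\"
DEST_KEY = queue
FORMAT = nullQueue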
Sure, here is the configuration of my inputs.conf file:

[tcpout:// <ip-address>:<port>]
[monitor://C:\Users\admin\Desktop\practicelogs.txt]
disabled = 0
index = practicelogs
sourcetype = practicelogs

I didn't understand what you meant by "read from beginning". Can you please elaborate on that? Thanks.
Hi @tahaahmed354 It looks like you may have mistakenly configured the input to read from the beginning every time. To troubleshoot this issue, could you please copy and paste the inputs.conf from your Windows UF (only the relevant portion is enough; remove any sensitive values)? Thanks.
Hi @Poojitha The Splunk command "spath" enables you to extract information from the structured data formats XML and JSON. The command reference doc link is: https://docs.splunk.com/Documentation/Splunk/9.1.2/SearchReference/Spath Please let us know if you are able to use the spath command (as seen in the previous reply); alternatively, you could use a direct "rex" command to extract field values and do the stats. But spath is the simplest option, I think; a small self-contained example follows. Please let us know if you are OK with spath or not, thanks.
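To illustrate, a quick self-contained spath example you can paste into a search bar; the JSON payload and field names are made up for demonstration:

| makeresults
| eval _raw="{\"employee\": {\"name\": \"alice\", \"id\": 42}}"
| spath path=employee.name output=employee_name
| table employee_name

This should return employee_name=alice; with your real data you would replace the path with the keys from your own JSON.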
I am using a single universal forwarder on my Windows machine to send a log file to my Splunk host machine deployed on Ubuntu. The problem is that there were 3 log events initially in the file, and Splunk read those events and displayed them on the dashboard. But when I appended 10 more events to the same file manually, the dashboard showed 16 log events when there are only 13 events in the log file. It is reading the first three logs twice. How do I resolve this issue?
You can try something like this.

<base_search>
| eval error=coalesce(spath(response, "errors{}"), spath(response, "errors"))
| fields - response
``` extract variables from the error messages ```
| rex field=error "(?i)sub\s+\'(?<sub>[^\']+)\'"
| rex field=error "(?i)product\s+id\s+(?<product_id>[^\s]+)"
| rex field=error "(?i)location\s+id\s+(?<location_id>[^\s]+)"
| rex field=error "(?i)datetime\s+(?<start_datetime>\w+\s+\d{4}(?:\-\d{2}){2}T\d{2}(?:\:\d{2}){2}(?:\+|\-)\d{2}\:\d{2})"
``` replace variables in the error messages to get a standardized set of error messages to do counts against ```
| eval error=replace(replace(replace(replace(error, "(?i)sub\s+\'([^\']+)\'", "sub '***'"), "(?i)product\s+id\s+([^\s]+)", "product id ***"), "(?i)location\s+id\s+([^\s]+)", "location id ***"), "(?i)datetime\s+(\w+\s+\d{4}(?:\-\d{2}){2}T\d{2}(?:\:\d{2}){2}(?:\+|\-)\d{2}\:\d{2})", "datetime ***")
``` stats aggregation to get counts of error messages ```
| stats count as count, values(sub) as sub, values(product_id) as product_id, values(location_id) as location_id, values(start_datetime) as start_datetime by error

Results should look something like this: you can see the counts next to the standardized error messages. I also went ahead and carried over all the variables that were replaced in the error messages, for context.

You could also check out the cluster command, as it will give you similar results without having to do all the extractions and replacements in inline SPL.

<base_search>
| table _time, response
| eval error=coalesce(spath(response, "errors{}"), spath(response, "errors"))
| fields - response
| cluster field=error t=0.4 showcount=true countfield=count

Results will look like this: the error messages aren't redacted, but their counts line up pretty well with the previous example, so the clustering appears to work decently. You can read up more on the cluster command here: https://docs.splunk.com/Documentation/Splunk/9.1.2/SearchReference/Cluster