All Posts


Hi @larrydavid , the easiest approach is to create a lookup (possibly an automatic one!) containing the combinations of apps and hosts that define the environments, so you can use the lookup in your searches. Something like this:

environment   app    host
env1          app1   host1
env1          app1   host2
env1          app1   host3
env2          app2   host4
env2          app2   host5
env2          app2   host6
env3          app3   host7
env3          app3   host8
env3          app3   host9

One additional question: if each application uses some servers and there's a 1:n relation between apps and hosts, why don't you use only apps to define your environment?

Then, remember that there's the IN() operator to use instead of OR:

source=*app1.log host IN (host1,host2,host3,host4)

it's shorter! Ciao. Giuseppe
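If you don't make the lookup automatic, a subsearch can expand it into host constraints at search time. A minimal sketch, assuming the lookup is saved as environments.csv with the fields shown above (file and field names are illustrative):

source=*app1.log [| inputlookup environments.csv | where environment="env1" AND app="app1" | fields host]

The subsearch returns the matching host values, which Splunk implicitly combines with OR.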
Hi @anooshac  If you want to run this on a schedule then you might want to look at putting this into a Bash script and running it as a cronjob. Once you have a working curl command, add it to a bash script, ensure it is executable (chmod +x) and then add it to your user's cron (crontab -e). To run hourly you would use a schedule like 1 * * * *, which runs at 1 minute past each hour. This assumes you are running on a Linux system. Please let me know how you get on and consider accepting this answer or adding karma to it if it has helped. Regards Will
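A minimal sketch of what that could look like - every value here (endpoint, credentials, search, paths) is a placeholder to adapt:

#!/bin/bash
# export_results.sh - run a Splunk export via the REST API and save the output
curl -sk -u admin:changeme https://localhost:8089/services/search/jobs/export \
  --data-urlencode search="search index=main | head 100" \
  -d output_mode=csv > /tmp/export_$(date +%Y%m%d%H%M).csv

And the matching crontab entry to run it at 1 minute past each hour:

1 * * * * /opt/scripts/export_results.sh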
Hi @KKuser  If you are running Splunk Cloud then you might find you have multiple SH; this would mean the addresses are something like es-<stackName>.splunkcloud.com and itsi-<stackName>.splunkcloud.com - in this example they are part of the same deployment. However, there are other ways that Splunk deployments can be configured and connected, such as multiple SH/SHC as search peers on a single-site or multisite cluster if on-premise. These SH can be independent of each other but ultimately connect to the same indexers. You can also set up federated search between different instances so they can search the same data. Either way, in these cases users are typically configured independently. It would be good to understand what you are trying to do, or what information you're trying to pull together, along with any other info you have (e.g. is this a Splunk Cloud or an on-premise deployment)? Then I might be able to tailor the advice further. Please let me know how you get on and consider accepting this answer or adding karma to it if it has helped. Regards Will
@daniedoe  You're correct. The splunkdConnectionTimeout setting in web.conf primarily affects how Splunk Web (UI) interacts with splunkd. For direct REST API calls made to splunkd on port 8089, the timeout behavior can be different. If you need more detailed information, you can refer to: Solved: How do I change the REST API execution timeout? - Splunk Community  Access endpoint descriptions - Splunk Documentation
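For direct splunkd calls, the client typically enforces its own timeout as well. A minimal illustration, with placeholder host, credentials and timeout value:

curl -k -u admin:changeme -m 300 https://localhost:8089/services/server/info

Here curl's -m option caps the total request time on the client side, independently of web.conf.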
Hi @kiran_panchavat , adding a bit of information to the perfect answer of @kiran_panchavat: it's always a best practice to save all the customizations that you did in ES in a custom app, e.g. custom field extractions, custom correlation searches, dashboards or reports, or, as in your case, macros: don't leave anything custom in the Enterprise Security app (or the other module apps). Ciao. Giuseppe
Hi @SN1  I would recommend running the following on your old SH to find the macro easily:

/opt/splunk/bin/splunk btool macros list MacroName --debug

Replace MacroName with the name of your missing macro - this should output the configuration of the macro and include the path of the file it resides in. If you still do not see the macro there then it could be a private knowledge object. Did you copy your users' private data from /opt/splunk/etc/users as well? Did you copy all the apps from the old SH to the new SH? Please let me know how you get on and consider accepting this answer or adding karma to it if it has helped. Regards Will
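If the private knowledge objects were missed, copying the users directory across is often all that's needed. A sketch, assuming SSH access from the old SH to the new one (the hostname new-sh and the paths are illustrative):

rsync -av /opt/splunk/etc/users/ new-sh:/opt/splunk/etc/users/

followed by a restart of Splunk on the new SH so the copied objects are picked up.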
Hi @Cheng2Ready , if you need to exclude only the days following holidays, your approach is correct. If instead you need to exclude both the holidays and the days following them, you have to implement a mix of the two solutions, with both checks. Let us know if we can help you more, or, please, accept one answer for the other people of the Community. Ciao and happy splunking Giuseppe P.S.: Karma Points are appreciated by all the contributors
@AstinSebastian  All apps uploaded to Splunkbase are automatically added to the Cloud Vetting queue. However, due to a significant backlog, apps that are not associated with customer installation re... See more...
@AstinSebastian  All apps uploaded to Splunkbase are automatically added to the Cloud Vetting queue. However, due to a significant backlog, apps that are not associated with customer installation requests and require manual vetting may remain in a pending state for an extended period. On the other hand, if your app has no manual_check results and no errors or failures, it can be automatically marked as compatible with one or both Splunk Cloud architectures, effectively bypassing the manual vetting queue. Of course, if your app encounters any errors or failures, you will receive an automated failure notification.
@KKuser It appears that you might be operating two separate Splunk Cloud instances. Please have a look: https://community.splunk.com/t5/Deployment-Architecture/Search-Head-on-Splunk-Cloud/m-p/204981
How to find out whether both Splunk instances are connected or not?
Hi @livehybrid , thanks for the reply. Is there any way that I can schedule this export? I have a tool which is scheduled to run every hour.
@SN1 When migrating from an old search head to a new one, it's essential to ensure that all configurations, including macros, are correctly transferred. However, if you're encountering issues such as missing macros after the migration, it indicates that some components may not have been properly moved. To address this, I recommend reaching out to Splunk Support for personalized assistance.
@SN1 To locate macros on the old search head from the Splunk UI, navigate to Settings > Advanced Search > Search Macros.
You need to clarify your constraints. The most obvious solution is to send a field "environment" along with log events. There are a million ways to do this. Then, if the deployment team is sympathetic to your cause, they can name hosts according to environment in some way. There are at least a dozen ways to do this. (One obvious way is to dedicate a special domain to each environment.) So, that's at least 1,000,012. You can also do an automatic lookup on hostname, as sketched below. That's at least 1,000,013 ways.
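One minimal sketch of the automatic-lookup route, assuming a lookup file named host_env.csv with columns host, application, and environment (all names here are illustrative):

transforms.conf
[host_env_lookup]
filename = host_env.csv

props.conf
[host::*]
LOOKUP-env = host_env_lookup host OUTPUT application environment

With that in place, a search like application=app1 environment=uat works without listing any hosts.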
So I copied the Enterprise Security app folder from the old SH to the new one, but it is showing a "macro not found" error. Where can I find the macros of this app, and how do I migrate them as well?
Our application, Erasmith Add-on for WMI Exporter, is showing as Pending for both Victoria and Classic on Splunkbase. Under the details, it indicates 2 failures, but the failure report is not available. Additionally, during local cloud vetting, no errors or failures were observed. Could anyone guide me on what steps I should take next to resolve this issue?
Hello, I am trying to replace the host value, which is currently the UF, with event data as the value.

ACME-001 PROD-MFS-003: status="200/0" srcip="1.0.0.1" user="a7bk28" dhost="http://test_web.net/contents/content2.jpg?ee=ff&gg=hh" urlp="10" proto="HTTP/http" mtd="GET" urlc="Music" rep="24" mt="image/jpeg" mlwr="-" app="-" bytes="601/274/31302/00012" ua="Mozilla/5.0 (webOS/1.3; U; en-US) AppleWebKit/525.27.1 (KHTML, like Gecko) Version/1.0 Safari/525.27.1 Desktop/1.0" lat="0/0/05/14" rule="rule14 bad" url="http://test_web.com/page5/e.jpg?ee=ff&gg=hh"
ACME-001 PROD-POS-006: status="200/0" srcip="1.0.0.13" user="ItsEmeline" dhost="http://test_web.net/users/user2.jpg?ee=ff&gg=hh" urlp="10" proto="HTTP/http" mtd="GET" urlc="Beauty" rep="21" mt="application/xml" mlwr="-" app="-" bytes="534/020/100/130" ua="Mozilla/5.0 (X11; Linux x86_64; rv:7.0a1) Gecko/20110623 Firefox/7.0a1" lat="0/10/026/105" rule="rule12 bad" url="http://test_web.net/contents/content2.jpg?ee=ff&gg=hh"

ACME-001 is what I want to be placed in as the value for the host field. These are the props and transforms that I am using.

props.conf
[mcafee:wg:kv]
TRANSFORMS-changehost = changehost
SHOULD_LINEMERGE = false
DATETIME_CONFIG = current

transforms.conf
[changehost]
DEST_KEY = MetaData:Host
REGEX = ^(?P<host>\S+)
FORMAT = host::$1

I have also tried ^(\S+) for the regex. I have 1 SH, 1 CM, 2 IDX and 1 UF. I have put the props and transforms in an app and pushed them to the indexers from the CM; they are on both indexers in /opt/splunk/etc/peer-apps. I have a TA with the same sourcetype that I am using in the props in my app. I'm wondering if I should add the props and transforms to a local folder in the TA instead of having them in a separate app. Any suggestions would be much appreciated.
Hello, I have logs coming in with the host showing as the UF. I want to replace the host value with some event data. Here is a sample of the data.

ACME-001 HOST-003: status="407/0" srcip="1.0.0.2" user="VeroRivas" dhost="http://test_web.net/contents/content1.jpg?aa=bb&cc=dd" urlp="401" proto="HTTP/https" mtd="CONNECT" urlc="Movie" rep="2" mt="text/html" mlwr="-" app="-" bytes="001/0/0/3180" ua="Mozilla/5.0 (webOS/1.3; U; en-US) AppleWebKit/525.27.1 (KHTML, like Gecko) Version/1.0 Safari/525.27.1 Desktop/1.0" lat="0/0/0/3" rule="rule1 ok" url="http://test_web.com/page3/c.jpg?ee=ff&gg=hh"
ACME-001 ops-sys-002: status="407/0" srcip="1.0.0.11" user="roisiningle" dhost="http://test_web.net/contents/content1.jpg?aa=bb&cc=dd" urlp="401" proto="HTTP/https" mtd="CONNECT" urlc="Food" rep="-2" mt="text/html" mlwr="-" app="-" bytes="206/0/0/0040" ua="Mozilla/5.0 (X11; Ubuntu; Linux x86_64; rv:14.0) Gecko/20100101 Firefox/14.0.1" lat="0/0/0/1" rule="rule1 ok" url="http://test_web.com/page5/e.jpg?ee=ff&gg=hh"
ACME-001 BUSDEV-005: status="200/0" srcip="1.0.0.13" user="roonixr" dhost="http://test_web.net/users/user2.jpg?ee=ff&gg=hh" urlp="10" proto="HTTP/http" mtd="GET" urlc="Advertisement" rep="-3" mt="application/javascript" mlwr="-" app="-" bytes="142/020/032/023" ua="Mozilla/5.0 (X11; U; SunOS sun4m; en-US; rv:1.4b) Gecko/20030517 Mozilla Firebird/0.6" lat="0/05/30/53" rule="rule8 good" url="http://test_web.net/users/user2.jpg?ee=ff&gg=hh"

ACME-001 is what I want to be used for the value of host. I am in an indexer cluster environment with 1 SH, CM, 2 IDX and 1 UF. I have pushed these props and transforms to the indexers with no success. The UF is still showing as the host value.

props.conf
[mcafee:wg:kv]
TRANSFORMS-changehost = changehost
SHOULD_LINEMERGE = false
DATETIME_CONFIG = current
#TIME_PREFIX =
#TIME_FORMAT =
LINE_BREAKER = ([\r\n]+)
#MAX_TIMESTAMP_LOOKAHEAD =
TRUNCATE = 999999
EVENT_BREAKER_ENABLE = true
EVENT_BREAKER = ([\r\n]+)

transforms.conf
[changehost]
DEST_KEY = MetaData:Host
REGEX = ^(?P<host>\S+)
FORMAT = host::$1

Any help would be much appreciated
Our team looks after 7 applications; we have 5 environments, and each application sits on between 2 and 4 servers, depending on the environment. Each app instance has its own dedicated server, so in other words, given a hostname, you can figure out exactly which application and which environment it is for.

At the moment, if we want to search the logs of one of the applications (app1) in UAT, and this app has 4 servers in UAT, the only way we can do this is by using the following search parameters:

source=*app1.log host=host1 OR host=host2 OR host=host3 OR host=host4

Sometimes we have a few different applications talking to each other, so we end up having to list a long set of host names, which gets quite tedious. We have a separate team that manages Splunk across the organisation.

Is there something we could be asking the Splunk team to do for us to make our searching easier? Is there something they could do that would result in us being able to write something like

application=app1 environment=uat

instead of having to specify host names for the environment that we are interested in?

Our team would appreciate any suggestions that can make our work easier. Thank you