All Posts


Hi @SN1 I would recommend running the following on your old SH to find the macro easily:

/opt/splunk/bin/splunk btool macros list MacroName --debug

Replace MacroName with the name of your missing macro - this should output the configuration of the macro, including the path of the file the macro resides in. If you still do not see the macro there, it could be a private knowledge object. Did you copy your users' custom data from /opt/splunk/etc/users as well? Did you copy all the apps from the old SH to the new SH? Please let me know how you get on, and consider accepting this answer or adding karma to it if it has helped. Regards Will
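For example, for a macro named my_es_macro (the macro name and paths here are illustrative), the output looks something like this; the first column is the file that contributes each setting, which tells you which app or user directory to migrate:

/opt/splunk/bin/splunk btool macros list my_es_macro --debug
/opt/splunk/etc/apps/SplunkEnterpriseSecuritySuite/default/macros.conf [my_es_macro]
/opt/splunk/etc/apps/SplunkEnterpriseSecuritySuite/default/macros.conf definition = index=notable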
Hi @Cheng2Ready, if you need to exclude only the days following holidays, your approach is correct. If instead you need to exclude both the holidays and the following days, you have to implement a mix of the two solutions, with both checks. Let us know if we can help you more, or, please, accept one answer for the other people of the Community. Ciao and happy splunking Giuseppe P.S.: Karma Points are appreciated by all the contributors
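A minimal sketch of that mix, assuming holidays.csv has a date column in %Y-%m-%d format as in the earlier posts:

your_search
| eval date=strftime(_time,"%Y-%m-%d")
| eval prev_date=strftime(_time-86400,"%Y-%m-%d")
| search NOT [ | inputlookup holidays.csv | fields date ]
| search NOT [ | inputlookup holidays.csv | rename date AS prev_date | fields prev_date ]
| ...

The first NOT drops events on the holidays themselves; the second drops events whose previous day is a holiday, i.e. the day after each holiday.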
@AstinSebastian  All apps uploaded to Splunkbase are automatically added to the Cloud Vetting queue. However, due to a significant backlog, apps that are not associated with customer installation requests and require manual vetting may remain in a pending state for an extended period. On the other hand, if your app has no manual_check results and no errors or failures, it can be automatically marked as compatible with one or both Splunk Cloud architectures, effectively bypassing the manual vetting queue. Of course, if your app encounters any errors or failures, you will receive an automated failure notification.
@KKuser It appears that you might be operating two separate Splunk Cloud instances. Please have a look: https://community.splunk.com/t5/Deployment-Architecture/Search-Head-on-Splunk-Cloud/m-p/204981?utm_source=chatgpt.com
How to find out whether both Splunk instances are connected or not?
Hi @livehybrid, thanks for the reply. Is there any way that I can schedule this export? I have a tool which is scheduled to run every hour.
@SN1 When migrating from an old search head to a new one, it's essential to ensure that all configurations, including macros, are correctly transferred. However, if you're encountering issues such as missing macros after the migration, it indicates that some components may not have been properly moved. To address this, I recommend reaching out to Splunk Support for personalized assistance.
@SN1 To locate macros on the old search head from the Splunk UI, navigate to Settings > Advanced Search > Search Macros.
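If the macro does not show up in the UI, a quick way to locate its definition on disk is to grep the configuration tree on the old SH. A sketch (replace my_macro_name with the name from the error message; macros with arguments are stored as [name(1)], so matching on the prefix is deliberate):

# Search app-level and private (per-user) macro definitions
grep -r "\[my_macro_name" /opt/splunk/etc/apps /opt/splunk/etc/users --include=macros.conf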
You need to clarify your constraints. The most obvious solution is to send a field "environment" along with log events. There are a million ways to do this. Then, if the deployment team is sympathetic to your cause, they can name hosts according to environment in some way. There are at least a dozen ways to do this. (One obvious way is to dedicate a special domain to environment.) So, that's at least 1,000,012. You can also do an automatic lookup on hostname. That's at least 1,000,013 ways.
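A minimal sketch of the automatic-lookup route, assuming a CSV named host_env.csv with columns host, application, environment (all names here are illustrative):

transforms.conf:
[host_env]
filename = host_env.csv

props.conf:
[host::*]
LOOKUP-host_env = host_env host OUTPUT application environment

The Splunk team would deploy this on the search heads; after that, a search like source=*app1.log application=app1 environment=uat works without listing any hosts.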
So I copied the Enterprise Security app folder from the old SH to the new one, but it is showing a "macro not found" error. Where can I find the macros of this app, and how do I migrate them as well?
Our application, Erasmith Add-on for WMI Exporter, is showing as Pending for both Victoria and Classic in Splunkbase. Under the details, it indicates 2 failures, but the failure report is not available. Additionally, during local cloud vetting, no errors or failures were observed. Could anyone guide me on what steps I should take next to resolve this issue?
Hello, I am trying to replace the host value, which is currently the UF, with a value taken from the event data.

ACME-001 PROD-MFS-003: status="200/0" srcip="1.0.0.1" user="a7bk28" dhost="http://test_web.net/contents/content2.jpg?ee=ff&gg=hh" urlp="10" proto="HTTP/http" mtd="GET" urlc="Music" rep="24" mt="image/jpeg" mlwr="-" app="-" bytes="601/274/31302/00012" ua="Mozilla/5.0 (webOS/1.3; U; en-US) AppleWebKit/525.27.1 (KHTML, like Gecko) Version/1.0 Safari/525.27.1 Desktop/1.0" lat="0/0/05/14" rule="rule14 bad" url="http://test_web.com/page5/e.jpg?ee=ff&gg=hh"

ACME-001 PROD-POS-006: status="200/0" srcip="1.0.0.13" user="ItsEmeline" dhost="http://test_web.net/users/user2.jpg?ee=ff&gg=hh" urlp="10" proto="HTTP/http" mtd="GET" urlc="Beauty" rep="21" mt="application/xml" mlwr="-" app="-" bytes="534/020/100/130" ua="Mozilla/5.0 (X11; Linux x86_64; rv:7.0a1) Gecko/20110623 Firefox/7.0a1" lat="0/10/026/105" rule="rule12 bad" url="http://test_web.net/contents/content2.jpg?ee=ff&gg=hh"

ACME-001 is what I want to be placed in as the value for the host field. These are the props and transforms that I am using.

props.conf
[mcafee:wg:kv]
TRANSFORMS-changehost = changehost
SHOULD_LINEMERGE = false
DATETIME_CONFIG = current

transforms.conf
[changehost]
DEST_KEY = MetaData:Host
REGEX = ^(?P<host>\S+)
FORMAT = host::$1

I have also tried ^(\S+) for the regex. I have 1 SH, 1 CM, 2 IDX and 1 UF. I have put the props and transforms in an app and pushed them to the indexers from the CM; they are on both indexers in /opt/splunk/etc/peer-apps. I have a TA that has the same sourcetype that I am using in the props in my app. I'm wondering if I should add the props and transforms to the local folder in the TA instead of having them in a separate app. Any suggestions would be much appreciated.
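For comparison, the per-event host override that Splunk documents is usually written as below; this assumes the data really is parsed with sourcetype mcafee:wg:kv on the indexers (if the TA overrides or renames the sourcetype before this transform runs, it will never fire):

# props.conf - on the instance that first parses the data (here, the indexers)
[mcafee:wg:kv]
TRANSFORMS-changehost = changehost

# transforms.conf - capture the first non-space token and write it to the host metadata key
[changehost]
REGEX = ^(\S+)
FORMAT = host::$1
DEST_KEY = MetaData:Host

If both the TA and your app define settings for this sourcetype, running /opt/splunk/bin/splunk btool props list mcafee:wg:kv --debug on an indexer shows which file each setting is actually taken from.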
Hello, I have logs coming in with the host showing as the UF. I want to replace the host value with some event data. Here is a sample of the data.

ACME-001 HOST-003: status="407/0" srcip="1.0.0.2" user="VeroRivas" dhost="http://test_web.net/contents/content1.jpg?aa=bb&cc=dd" urlp="401" proto="HTTP/https" mtd="CONNECT" urlc="Movie" rep="2" mt="text/html" mlwr="-" app="-" bytes="001/0/0/3180" ua="Mozilla/5.0 (webOS/1.3; U; en-US) AppleWebKit/525.27.1 (KHTML, like Gecko) Version/1.0 Safari/525.27.1 Desktop/1.0" lat="0/0/0/3" rule="rule1 ok" url="http://test_web.com/page3/c.jpg?ee=ff&gg=hh"

ACME-001 ops-sys-002: status="407/0" srcip="1.0.0.11" user="roisiningle" dhost="http://test_web.net/contents/content1.jpg?aa=bb&cc=dd" urlp="401" proto="HTTP/https" mtd="CONNECT" urlc="Food" rep="-2" mt="text/html" mlwr="-" app="-" bytes="206/0/0/0040" ua="Mozilla/5.0 (X11; Ubuntu; Linux x86_64; rv:14.0) Gecko/20100101 Firefox/14.0.1" lat="0/0/0/1" rule="rule1 ok" url="http://test_web.com/page5/e.jpg?ee=ff&gg=hh"

ACME-001 BUSDEV-005: status="200/0" srcip="1.0.0.13" user="roonixr" dhost="http://test_web.net/users/user2.jpg?ee=ff&gg=hh" urlp="10" proto="HTTP/http" mtd="GET" urlc="Advertisement" rep="-3" mt="application/javascript" mlwr="-" app="-" bytes="142/020/032/023" ua="Mozilla/5.0 (X11; U; SunOS sun4m; en-US; rv:1.4b) Gecko/20030517 Mozilla Firebird/0.6" lat="0/05/30/53" rule="rule8 good" url="http://test_web.net/users/user2.jpg?ee=ff&gg=hh"

ACME-001 is what I want to be used as the value of host. I am in an indexer cluster environment with 1 SH, 1 CM, 2 IDX and 1 UF. I have pushed these props and transforms to the indexers with no success; the UF is still showing as the host value.

props.conf
[mcafee:wg:kv]
TRANSFORMS-changehost = changehost
DATETIME_CONFIG = current
#TIME_PREFIX =
#TIME_FORMAT =
SHOULD_LINEMERGE = false
LINE_BREAKER = ([\r\n]+)
#MAX_TIMESTAMP_LOOKAHEAD =
TRUNCATE = 999999
EVENT_BREAKER_ENABLE = true
EVENT_BREAKER = ([\r\n]+)

transforms.conf
[changehost]
DEST_KEY = MetaData:Host
REGEX = ^(?P<host>\S+)
FORMAT = host::$1

Any help would be much appreciated
Our team looks after 7 applications; we have 5 environments, and each application sits on between 2 and 4 servers, depending on the environment. Each app instance has its own dedicated server, so, in other words, given a hostname you can figure out exactly which application and which environment it is for. At the moment, if we want to search the logs of one of the applications (app1) in UAT, and this app has 4 servers in UAT, the only way we can do this is with the following search parameters:

source=*app1.log host=host1 OR host=host2 OR host=host3 OR host=host4

Sometimes we have a few different applications talking to each other, so we end up having to mention a long list of host names, which gets quite tedious. We have a separate team that manages Splunk across the organisation. Is there something we could be asking the Splunk team to do for us to make our searching easier? Something that would let us search with

application=app1 environment=uat

instead of having to specify host names for the environment we are interested in? Our team would appreciate any suggestions that can make our work easier. Thank you
@gcusello Thank you. I looked at your post and saw

your_search
| eval date=strftime(_time,"%Y-%m-%d")
| search NOT [ inputlookup holidays.csv | fields date ]
| ...

in this way you exclude all the events in the days contained in the lookup. So now the question is: I am using this lookup file to say "do not alert on these dates in the lookup", but we need to +1 day on them. So let's say the lookup table has 2025-02-17; we would need to add 1 day to it, so now it's actually muting on the 18th, if that makes sense? To simplify: for the dates in the lookup table we just need to +1 day and make sure that on those dates we just mute the alert. Would it look like this?

your_search
| eval date=strftime(_time + 86400,"%Y-%m-%d")
| search NOT [ inputlookup holidays.csv | fields date ]
| ...

Also, is there a difference between using inputlookup and lookup? All the best!
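For what it's worth, a sketch of the day-shift arithmetic, assuming the goal is to mute the day after each date in holidays.csv: an event occurring on the 18th should match the lookup entry 2025-02-17, so you compare the event's previous day against the lookup, i.e. subtract a day rather than add one:

your_search
| eval date=strftime(_time - 86400, "%Y-%m-%d")
| search NOT [ | inputlookup holidays.csv | fields date ]
| ...

As for inputlookup vs lookup: inputlookup reads the lookup file itself as results (which is what the subsearch here needs), while the lookup command enriches existing events by matching on a field.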
Hi, any update on this from anyone? Thank you!
I believe this only applies to how Splunk Web (the UI) interacts with splunkd, not to how direct REST API calls are made to splunkd on port 8089. I am trying to determine if I should just use a client-side timeout for the endpoint call.
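If a client-side timeout turns out to be the way to go, it can be as simple as the sketch below (hostname, credentials, and endpoint are placeholders):

# Fail the request if it takes longer than 60 seconds end to end
curl -k -u admin:changeme --max-time 60 \
  "https://splunk.example.com:8089/services/search/jobs/export" \
  -d search="search index=_internal | head 10" -d output_mode=json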
@prasanthkota did you get this working? I am working on a custom function to convert the Splunk query result in the vault id to CSV and would like to know if one already exists for this. What was your final custom function?
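For the CSV conversion step, a generic sketch along these lines works on a list of result dicts (plain Python, independent of any SOAR-specific vault APIs; all names are illustrative):

import csv
import io

def results_to_csv(results):
    """Convert a list of result dicts (e.g. parsed Splunk search results) to a CSV string."""
    if not results:
        return ""
    # Collect the union of keys so rows with missing fields still serialize cleanly
    fieldnames = []
    for row in results:
        for key in row:
            if key not in fieldnames:
                fieldnames.append(key)
    buf = io.StringIO()
    writer = csv.DictWriter(buf, fieldnames=fieldnames, restval="")
    writer.writeheader()
    writer.writerows(results)
    return buf.getvalue()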
That's right - users in the support portal are completely separate from your actual Splunk instance and are not automatically set up; there is no link between them. In fact, you can have users on your support portal who do not have a login to your Splunk instance, if appropriate.