All Posts

@SN1 To locate the macros on the old search head from the Splunk UI, navigate to Settings > Advanced Search > Search macros.
You need to clarify your constraints. The most obvious solution is to send a field "environment" along with the log events; there are a million ways to do this. Then, if the deployment team is sympathetic to your cause, they can name hosts according to environment in some way; there are at least a dozen ways to do that. (One obvious way is to dedicate a special domain to each environment.) So, that's at least 1,000,012. You can also set up an automatic lookup on hostname. That makes at least 1,000,013 ways.
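One concrete way to send such a field along with the events, as a minimal sketch assuming a Universal Forwarder monitor input (the path, sourcetype, and values below are hypothetical placeholders), is the _meta setting in inputs.conf on the forwarder, paired with fields.conf on the search head so the indexed fields are searchable by name:

inputs.conf (on the forwarder)

[monitor:///var/log/app1/app1.log]
sourcetype = app1:log
# adds indexed fields to every event from this input
_meta = application::app1 environment::uat

fields.conf (on the search head)

[application]
INDEXED = true

[environment]
INDEXED = true

With that in place, searches like application=app1 environment=uat work without listing hosts.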
So I copied the Enterprise Security app folder from the old SH to the new one, but it is showing a "macro not found" error. Where can I find the macros of this app, and how do I migrate them as well?
Our application, Erasmith Add-on for WMI Exporter, is showing as Pending for both Victoria and Classic on Splunkbase. Under the details, it indicates 2 failures, but the failure report is not available. Additionally, during local cloud vetting, no errors or failures were observed. Could anyone guide me on what steps I should take next to resolve this issue?
Hello,

I am trying to replace the host value, which is currently the UF, with a value taken from the event data.

ACME-001 PROD-MFS-003: status="200/0" srcip="1.0.0.1" user="a7bk28" dhost="http://test_web.net/contents/content2.jpg?ee=ff&gg=hh" urlp="10" proto="HTTP/http" mtd="GET" urlc="Music" rep="24" mt="image/jpeg" mlwr="-" app="-" bytes="601/274/31302/00012" ua="Mozilla/5.0 (webOS/1.3; U; en-US) AppleWebKit/525.27.1 (KHTML, like Gecko) Version/1.0 Safari/525.27.1 Desktop/1.0" lat="0/0/05/14" rule="rule14 bad" url="http://test_web.com/page5/e.jpg?ee=ff&gg=hh"

ACME-001 PROD-POS-006: status="200/0" srcip="1.0.0.13" user="ItsEmeline" dhost="http://test_web.net/users/user2.jpg?ee=ff&gg=hh" urlp="10" proto="HTTP/http" mtd="GET" urlc="Beauty" rep="21" mt="application/xml" mlwr="-" app="-" bytes="534/020/100/130" ua="Mozilla/5.0 (X11; Linux x86_64; rv:7.0a1) Gecko/20110623 Firefox/7.0a1" lat="0/10/026/105" rule="rule12 bad" url="http://test_web.net/contents/content2.jpg?ee=ff&gg=hh"

ACME-001 is what I want placed in as the value for the host field. These are the props and transforms that I am using.

props.conf

[mcafee:wg:kv]
TRANSFORMS-changehost = changehost
SHOULD_LINEMERGE = false
DATETIME_CONFIG = current

transforms.conf

[changehost]
DEST_KEY = MetaData:Host
REGEX = ^(?P<host>\S+)
FORMAT = host::$1

I have also tried ^(\S+) for the regex.

I have 1 SH, 1 CM, 2 IDX and 1 UF. I have put the props and transforms in an app and pushed them to the indexers from the CM; they are on both indexers in /opt/splunk/etc/peer-apps. I have a TA that defines the same sourcetype that I am using in props in my app. I'm wondering if I should add the props and transforms to a local folder in the TA instead of having them in a separate app.

Any suggestions would be much appreciated.
Hello, I have logs coming in with the host showing as the UF. I want to replace the host value with some event data. Here is a sample of the data.

ACME-001 HOST-003: status="407/0" srcip="1.0.0.2" user="VeroRivas" dhost="http://test_web.net/contents/content1.jpg?aa=bb&cc=dd" urlp="401" proto="HTTP/https" mtd="CONNECT" urlc="Movie" rep="2" mt="text/html" mlwr="-" app="-" bytes="001/0/0/3180" ua="Mozilla/5.0 (webOS/1.3; U; en-US) AppleWebKit/525.27.1 (KHTML, like Gecko) Version/1.0 Safari/525.27.1 Desktop/1.0" lat="0/0/0/3" rule="rule1 ok" url="http://test_web.com/page3/c.jpg?ee=ff&gg=hh"

ACME-001 ops-sys-002: status="407/0" srcip="1.0.0.11" user="roisiningle" dhost="http://test_web.net/contents/content1.jpg?aa=bb&cc=dd" urlp="401" proto="HTTP/https" mtd="CONNECT" urlc="Food" rep="-2" mt="text/html" mlwr="-" app="-" bytes="206/0/0/0040" ua="Mozilla/5.0 (X11; Ubuntu; Linux x86_64; rv:14.0) Gecko/20100101 Firefox/14.0.1" lat="0/0/0/1" rule="rule1 ok" url="http://test_web.com/page5/e.jpg?ee=ff&gg=hh"

ACME-001 BUSDEV-005: status="200/0" srcip="1.0.0.13" user="roonixr" dhost="http://test_web.net/users/user2.jpg?ee=ff&gg=hh" urlp="10" proto="HTTP/http" mtd="GET" urlc="Advertisement" rep="-3" mt="application/javascript" mlwr="-" app="-" bytes="142/020/032/023" ua="Mozilla/5.0 (X11; U; SunOS sun4m; en-US; rv:1.4b) Gecko/20030517 Mozilla Firebird/0.6" lat="0/05/30/53" rule="rule8 good" url="http://test_web.net/users/user2.jpg?ee=ff&gg=hh"

ACME-001 is what I want to be used for the value of host. I am in an indexer cluster environment with 1 SH, 1 CM, 2 IDX and 1 UF. I have pushed these props and transforms to the indexers with no success; the UF is still showing as the host value.

Props:

[mcafee:wg:kv]
TRANSFORMS-changehost = changehost
SHOULD_LINEMERGE = false
DATETIME_CONFIG = current
#TIME_PREFIX =
#TIME_FORMAT =
LINE_BREAKER = ([\r\n]+)
#MAX_TIMESTAMP_LOOKAHEAD =
TRUNCATE = 999999
EVENT_BREAKER_ENABLE = true
EVENT_BREAKER = ([\r\n]+)

Transforms:

[changehost]
DEST_KEY = MetaData:Host
REGEX = ^(?P<host>\S+)
FORMAT = host::$1

Any help would be much appreciated.
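As a quick sanity check that the regex actually captures ACME-001, a sketch like this can be run in the search bar (newhost is just an illustrative field name, and the sample event is shortened):

| makeresults
| eval _raw="ACME-001 HOST-003: status=\"407/0\" srcip=\"1.0.0.2\""
| rex "^(?<newhost>\S+)"
| table newhost

If newhost comes back as ACME-001, the REGEX itself is fine and the problem lies in where the props and transforms are being applied.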
Our team looks after 7 applications, we have 5 environments, and each application sits on between 2 and 4 servers, depending on the environment. Each app instance has its own dedicated server; in other words, given a hostname, you can figure out exactly which application and which environment it is for.

At the moment, if we want to search the logs of one of the applications (app1) in UAT, and this app has 4 servers in UAT, the only way we can do this is by using the following search parameters:

source=*app1.log host=host1 OR host=host2 OR host=host3 OR host=host4

Sometimes we have a few different applications talking to each other, so we end up having to list a long series of host names, which gets quite tedious. We have a separate team that manages Splunk across the organisation.

Is there something we could be asking the Splunk team to do for us to make our searching easier? Is there something they could do that would let us search with something like

application=app1 environment=uat

instead of having to specify host names for the environment we are interested in?

Our team would appreciate any suggestions that can make our work easier.

Thank you
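For reference, one way the Splunk team could implement this, as a minimal sketch assuming hostnames can be mapped in a CSV (all names below are hypothetical), is an automatic lookup keyed on host:

host_env.csv (uploaded as a lookup table file)

host,application,environment
host1,app1,uat
host2,app1,uat

transforms.conf

[host_env_lookup]
filename = host_env.csv

props.conf

[host::*]
LOOKUP-app_env = host_env_lookup host OUTPUT application environment

After that, a search like source=*app1.log application=app1 environment=uat replaces the long OR list of hosts.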
@gcusello Thank you. I looked at your post and saw:

your_search
| eval date=strftime(_time,"%Y-%m-%d")
| search NOT [ inputlookup holidays.csv | fields date ]
| ...

In this way you exclude all the events on the days contained in the lookup. So now the question is: I am using this lookup file to say "do not alert on these dates", but we need to add 1 day to them. Let's say the lookup table has 2025-02-17; we would need to add 1 day to it, so we are actually muting on the 18th, if that makes sense. To simplify: for the dates in the lookup table, we just need to add 1 day and make sure that on those shifted dates we mute the alert.

Would it look like this?

your_search
| eval date=strftime(_time + 86400,"%Y-%m-%d")
| search NOT [ inputlookup holidays.csv | fields date ]
| ...

Also, is there a difference between using inputlookup and lookup?

All the best!
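One way to express the +1 day shift, as a minimal sketch assuming the lookup's column is literally named date in %Y-%m-%d format, is to move the lookup dates forward inside the subsearch rather than moving the event time; an event on 2025-02-18 then matches the shifted 2025-02-17 entry:

your_search
| eval date=strftime(_time,"%Y-%m-%d")
| search NOT [ inputlookup holidays.csv
    | eval date=strftime(relative_time(strptime(date,"%Y-%m-%d"),"+1d"),"%Y-%m-%d")
    | fields date ]
| ...

Note that evaluating _time + 86400 instead would compare events against the day before each lookup date, i.e. it shifts in the opposite direction; relative_time also stays correct across DST changes, where a flat 86400 seconds can be off by an hour.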
Hi, any update on this from anyone? Thank you!
I believe this only applies to how Splunk Web (the UI) interacts with splunkd, not to how direct REST API calls are made to splunkd on port 8089. I am trying to determine if I should just use a client-side timeout for the endpoint call.
@prasanthkota Did you get this working? I am working on a custom function to convert the Splunk query result in the vaultid to CSV, and would like to know if one already exists for this. What was your final custom function?
That's right - users in the support portal are completely separate from your actual Splunk instance and are not automatically set up; there is no link between them. In fact, you can have users on your support portal who do not have a login to your Splunk instance, if appropriate.
Thanks for the help on this. The final solution for me is:

[general]
parallelIngestionPipelines = 200

I am not sure I see the benefit of taking the time to find the optimal size for the various queues as you suggest. I have the available CPU and memory to simply increase the pipelines. I will be adding several IFs and letting them load-balance, so ultimately 200 will be overkill and I may drop this back to something like 50 (or maybe I will not bother with that either, since everything is working).
Hi. We started using Splunk Observability Cloud for our Azure infrastructure. We have already set up the Azure integration with Splunk and are now in the process of creating dashboards and charts.

I ran into an issue when creating any chart that contains several objects. If I want to see dtu_consumption_percent for the SQL databases on my SQL Server (tens of databases), I can easily create a time chart containing data for all databases, but I cannot tell which line represents which database, because the name of every database looks like:

/SUBSCRIPTIONS/FULL_ID_OF_SUBSCRIPTION/RESOURCEGROUPS/RESOURCE_GROUP_NAME/PROVIDERS/MICROSOFT.SQL/SERVERS/THE_NAME_OF_SQL_SERVER/DATABASES/THE_NAME_OF_DATABASE

I do not see this full name even in the legend, and hovering the mouse over a line shows only a small part, like "/SUBSCRIPTIONS/FULL_ID_OF_SU".

I would like to have THE_NAME_OF_DATABASE, i.e. the resource name, instead of the full azure_resource_id. Is that possible?

Thank you
Hello @daniedoe

/services/saved/searches fetches saved searches globally, across all apps the authenticated user has access to, and the results depend on the permissions of the user making the API call. If you need a broader scope across multiple apps, or want results influenced by user permissions, use this namespace.

/servicesNS/nobody/SplunkEnterpriseSecuritySuite/saved/searches specifically queries saved searches within the SplunkEnterpriseSecuritySuite app, using the nobody namespace, meaning searches owned by nobody (i.e., shared objects) in that app. If your application works specifically within Splunk Enterprise Security and only needs correlation searches from ES, prefer this namespace.

Hopefully this helps. Have a nice day,
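For a quick comparison of the two scopes from the search bar, a sketch along these lines can help (the action.correlationsearch.enabled filter is a common ES convention for flagging correlation searches, but may vary by version):

| rest /servicesNS/nobody/SplunkEnterpriseSecuritySuite/saved/searches splunk_server=local
| search action.correlationsearch.enabled=1
| table title eai:acl.app eai:acl.sharing

Swapping in /services/saved/searches shows how the result set widens to every app the calling user can see.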
Hello. Maybe you need to edit /opt/splunk/etc/system/local/web.conf, adding the setting:

splunkdConnectionTimeout = 120

More HERE.
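For completeness, that setting belongs under the [settings] stanza of web.conf, and a restart of Splunk Web is typically needed for it to take effect; a minimal sketch:

[settings]
# server-side timeout (in seconds) for Splunk Web's connections to splunkd
splunkdConnectionTimeout = 120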
In a production application, what factors should I consider when deciding between using /services/saved/searches vs /servicesNS/nobody/SplunkEnterpriseSecuritySuite/saved/searches for a REST HTTP endpoint call to get correlation search information? Both return the same results for me.
Following on from 'by configuration by an existing portal admin in your organisation':

1. Now I understand that there is an Admin role for the Splunk Support Portal, and I believe it is different from roles for the Splunk instance. Right?
2. A Splunk Enterprise instance admin need not inherently be a Splunk portal admin too. Right? They should be given access to the Support Portal by the support portal admin, or by the Splunk team upon request of the Splunk portal admin. Right?
Hi @harryvdtol

Have a look at https://community.splunk.com/t5/Splunk-Dev/Link-url-in-splunk-dashboard/m-p/688493 which I think should give you some insight into how to achieve this.

Please let me know how you get on, and consider accepting this answer or adding karma to it if it has helped.

Regards
Will
I want to know if there is any server-side timeout within which a response must complete for a call to the endpoint GET /services/saved/searches. Does Splunk have a default timeout for handling these API calls?