All Posts

There is no straightforward answer to such a question. Firstly, let's jump to question 3: can you search without specifying an index? Well, yes and no. Yes, because you can issue the search command without explicitly listing an index. But if you don't say which indexes you want searched, Splunk will search the indexes set as default for your user's role. Good practice is to _not_ give users default indexes (and most importantly, don't define all indexes as default search indexes!) so that searches must name their indexes explicitly - this avoids confusion and prevents accidentally spawning heavy searches across too many indexes. So: 1) Yes, you can do index=* and if a user's role only has permissions for index=A and index=B, only those indexes will be searched. So technically you could do that, but it's a bit of a bad design. The same dashboard will behave differently for different users without any clear indication as to why, especially if it is meant to show some overall statistics without explicitly listing the indexes involved. 2) Yes, searching across all indexes can cause performance issues (the search itself matters most, of course, but having to browse through buckets from all indexes - even if only to exclude them via bloom filters - can be a performance hit). 4) It all depends on what your "application" is. It's hard to give a good answer to such a general question. On the one hand, it's good to have separate dashboards for different audiences so that they can be customized if needed; on the other hand, it adds maintenance overhead. So the usual answer is "it depends".
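To illustrate the point about default indexes, here is a minimal authorize.conf sketch (the role name and index names are invented for the example): access is granted to two indexes, but the default search list is left empty so users must name an index in every search.

[role_app_a_users]
# searchable indexes for this role (semicolon-separated)
srchIndexesAllowed = app_a_prod;app_a_nonprod
# no default indexes: a search without an explicit index= returns nothing
srchIndexesDefault =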
The 50k results limit for subsearch applies only to join! The default limit for a subsearch is 10k results.
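For reference, these limits live in limits.conf. A sketch of the relevant stanzas with their stock defaults (check limits.conf.spec for your release before changing anything):

[subsearch]
# cap on results returned by a generic subsearch
maxout = 10000

[join]
# cap on results pulled in by join's subsearch
subsearch_maxout = 50000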
Hi @secure, as @PickleRick said, in the main search you cannot use a command such as rex. You have two choices. Move the rex after the main search:

(index=serverdata sourcetype="server:stats") OR (index="hostapp" source=hostDB_Table dataasset="*host_Data*")
| rex "app_code=\"(?<application_code>[\w.\"]*)"

or use append:

index=serverdata sourcetype="server:stats"
| rex "app_code=\"(?<application_code>[\w.\"]*)"
| append [ search index="hostapp" source=hostDB_Table dataasset="*host_Data*" ]

This second solution works only if the secondary search returns fewer than 50,000 results, which is why I prefer the first one. In addition, there's a third solution that I prefer even more: if you create a fixed field extraction using the regex, you don't need to insert it in the search and you can use only the main search:

(index=serverdata sourcetype="server:stats") OR (index="hostapp" source=hostDB_Table dataasset="*host_Data*")

Ciao. Giuseppe
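As a sketch of that third option, a search-time extraction in props.conf could look like the following. The sourcetype stanza matches the server:stats sourcetype from the thread, but the exact character class is an assumption (the regex in the post appears garbled), so adjust it to your actual app_code values:

[server:stats]
# extract application_code from events containing app_code="..."
EXTRACT-application_code = app_code="(?<application_code>[\w.]+)"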
Hi @Karthikeya, in general, whether a common dashboard for all applications makes sense depends on your requirements and on the fields of the applications, so there isn't one answer based on best practices - the rules are your requirements. If all applications have the same fields you can have one dashboard; if they have different fields, the dashboard could become hard to read and I'd prefer separate dashboards. Anyway, answering your questions:

1. Can we create a common dashboard for all applications (nearly 200+ indexes) by giving index=* in the base search? We have A to Z indexes but User A has access to only the A index. If user A gives index=*, will Splunk look through A to Z indexes or only the A index they have access to?

First of all, having more than 200 indexes isn't a best practice because they are very difficult to manage and use: you should use different indexes only when you need different retention policies and/or different access grants. About the user: when a user runs index=*, only the indexes granted to him/her are searched. In addition, I don't like index=* in searches - find a rule to limit them.

2. We have a separate role called test engineer who has access to all indexes (A to Z). Is a common dashboard a good idea, given that when the engineer loads it all indexes will be searched, causing performance issues for users?

As I said, I don't like an index=* search even if the user can access all indexes, and anyway working with more than 200 indexes is really difficult! I'd limit the number of indexes, also grouping different logs in the same index (an index isn't a database table - it can contain different and heterogeneous logs) when they share the same retention and access rules. In addition, I suppose your applications are different and have different fields and information, so I suppose it's difficult to display all of them for all applications using the same dashboard!

3. We have app_name in place. Can I exclude index=* in the base search and use app_name="*app_name*" where app_name is a dropdown, so that by default * is not used and the dashboard is populated once the user selects an app_name?

In general, using a leading asterisk in a search term isn't a best practice; you could instead create an input backed by a lookup containing all the apps and select events based on the selected value. The lookup can be updated automatically by a scheduled search that runs e.g. every night.

4. Or would having a separate dashboard for each application work? The ask is to have a common dashboard - is that a good practice?

It's a best practice to try to reduce the number of dashboards, but probably a single one isn't the most efficient way to display your data! Try to define some grouping rules, e.g. applications with the same scope, the same information, or the same audience, and create a few dashboards, one for each group.

Ciao. Giuseppe
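A sketch of that lookup-backed dropdown approach (the index name, time range, and lookup file name are placeholders, not anything from the original thread). A nightly scheduled search maintains the lookup:

index=your_app_indexes app_name=* earliest=-24h
| stats count by app_name
| fields app_name
| outputlookup app_names.csv

and the dashboard dropdown input is then populated from it instead of scanning raw events:

| inputlookup app_names.csv
| fields app_name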
Are you referring to the indexers for S2S forwarding, or something else such as HEC, UI or REST API access? If you are looking for your indexer IPs then you may be able to resolve the DNS names in the outputs.conf file as @gcusello suggested and then deduplicate the results. However, be aware that these IPs can change if Splunk scales the number of indexers in operation within your stack or if any indexers require rebuilding. The search heads (SH), on the other hand, generally have fixed IPs which you wouldn't expect to change often, other than on the rare occasions where a SH is rebuilt. Looking in your _internal index you can find a list of hosts in the format sh*.splunkcloud.com, which you can resolve to provide your list of SH IP addresses for REST access if required.
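A quick sketch of that last step - the host pattern is an assumption based on the sh*.splunkcloud.com naming mentioned above, and the time range is arbitrary:

index=_internal host=sh* earliest=-24h
| stats count by host
| fields host

Each hostname returned can then be resolved (nslookup/dig) to get the corresponding SH IP address.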
Hi @Sec-Bolognese

I've achieved this before using the AWS CloudWatch agent; as the others have mentioned, this isn't really something you can do with the Splunk Universal Forwarder.

Step 1: Set up IAM permissions for the CloudWatch agent, if not already in place.
Create (or use an existing) IAM role that has permissions for CloudWatch Logs. Ensure the role includes at least these actions:
logs:CreateLogGroup
logs:CreateLogStream
logs:PutLogEvents
logs:DescribeLogStreams
If using EC2, attach the IAM role to your instance. Otherwise, provide credentials that have the above permissions.

Step 2: Install the CloudWatch agent.
For Amazon Linux, RHEL, or CentOS:
sudo yum update -y
sudo yum install -y amazon-cloudwatch-agent
For Ubuntu or Debian:
sudo apt-get update
sudo apt-get install -y amazon-cloudwatch-agent
(Alternatively, you can download the package directly from AWS if needed.)

Step 3: Create the CloudWatch agent configuration file.
Create a file at /opt/aws/amazon-cloudwatch-agent/etc/amazon-cloudwatch-agent.json and paste the following content into it:

{
  "logs": {
    "logs_collected": {
      "files": {
        "collect_list": [
          {
            "file_path": "/opt/splunkforwarder/var/log/*",
            "log_group_name": "splunkforwarder-logs",
            "log_stream_name": "{instance_id}",
            "timestamp_format": "%m-%d-%Y %H:%M:%S.%f %z"
          }
        ]
      }
    }
  }
}

Note:
Adjust "file_path" if you need a more specific file pattern (e.g., "/opt/splunkforwarder/var/log/*.log").
"log_group_name" is the CloudWatch Logs group that will be used. If it doesn't exist, the agent can create it (given sufficient permissions).
"log_stream_name" uses {instance_id} as a placeholder. You can change this if desired.
If your logs do not contain timestamps in the specified format, adjust or remove the "timestamp_format" setting.
(Optional) You can also run the configuration wizard and answer the prompts to generate a configuration file interactively:
sudo /opt/aws/amazon-cloudwatch-agent/bin/amazon-cloudwatch-agent-config-wizard

Step 4: Start the CloudWatch agent with your configuration.
Run the following command to start the agent using your configuration file:
sudo /opt/aws/amazon-cloudwatch-agent/bin/amazon-cloudwatch-agent-ctl -a fetch-config -m ec2 -c file:/opt/aws/amazon-cloudwatch-agent/etc/amazon-cloudwatch-agent.json -s
In the above:
-a fetch-config tells the agent to fetch the configuration.
-m ec2 indicates that the agent is running on an EC2 instance (use -m onPremise if running elsewhere).
-c file:... specifies the path to your configuration file.
-s starts the agent.

Step 5: Verify that logs are being sent.
Check the CloudWatch agent log file to ensure it started correctly:
sudo tail -f /opt/aws/amazon-cloudwatch-agent/logs/amazon-cloudwatch-agent.log
Log in to the AWS Console and navigate to CloudWatch - Logs. Look for the log group "splunkforwarder-logs" and verify that log streams and log events are appearing.

This should then allow you to send logs from /opt/splunkforwarder/var/log to CloudWatch Logs as well as to your Splunk Cloud instance as required.

Please let me know how you get on and consider upvoting/karma this answer if it has helped.

Regards
Will
Hi @Andre_, ok, use your script - the logic seems to be correct. Are you sure that it isn't possible to define a rule for the IIS logs? It seems very strange that your IIS logs are scattered across the filesystem without any rule; I suppose they are in a predefined location, and you could start from that location for your ingestion. Ciao. Giuseppe
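If the logs do turn out to live under the standard IIS log root, a minimal inputs.conf monitor sketch might look like the following (the path is the IIS default and may differ per site, and the sourcetype and index names are assumptions - use whatever your IIS add-on expects):

[monitor://C:\inetpub\logs\LogFiles]
# pick up only .log files anywhere under the IIS log root
whitelist = \.log$
sourcetype = iis
index = your_iis_index
disabled = 0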
Hi @Ben, because, using inheritance, the new role inherits all permissions from the role it is based on. For example, if you create a role inheriting from user, which can access all non-internal indexes, the new role will also have access to all non-internal indexes. Ciao. Giuseppe
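For context, role inheritance corresponds to the importRoles setting in authorize.conf. A minimal sketch (role and index names invented for the example) contrasting the two approaches:

[role_ben_inherited]
# inherits every capability and index grant of the built-in user role
importRoles = user

[role_ben_scoped]
# built from scratch: no inheritance, only the explicit grants below apply
srchIndexesAllowed = app_a_prod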
We are not a Splunk partner; we are a customer. We need the contact details of our Splunk Cloud Account Manager to update our internal records. Could you please provide us with the name, email, and contact information of our assigned account manager?
Hi @gcusello, I will look at the app permissions. About your question, I created the role from scratch. Thanks for the tip, but why shouldn't I use inheritance as a general guideline? Thanks for replying. Ben
Hello all, we have a requirement to have a common dashboard for all applications. For an application we have at most 2 indexes (one for non-prod env FQDNs and one for prod env FQDNs) and users are restricted based on index. My questions are:

1. Can we create a common dashboard for all applications (nearly 200+ indexes) by giving index=* in the base search? We have A to Z indexes but User A has access to only the A index. If user A gives index=*, will Splunk look through A to Z indexes or only the A index they have access to? (I am afraid of Splunk resource wastage.)

2. We have a separate role called test engineer who has access to all indexes (A to Z). Is a common dashboard a good idea, given that when the engineer loads it all indexes will be searched, which in turn causes performance issues for users?

3. We have app_name in place. Can I exclude index=* in the base search and use app_name="*app_name*" where app_name is a dropdown, so that by default * is not used and the dashboard is populated once the user selects an app_name?

4. Or would having a separate dashboard for each application work? The ask is to have a common dashboard - not sure if this is a good practice?

Please enlighten me with your thoughts and the best approach.
Always, always describe your use case in terms of data. Without data, it is very difficult for another person to understand what you are trying to achieve. Let me give you a starter. Suppose your raw data is

A   B    C     _time
1   38   299   2025-02-04 23:59:00
2   36   296   2025-02-04 23:56:00
3   34   291   2025-02-04 23:51:00
4   32   284   2025-02-04 23:44:00
5   30   275   2025-02-04 23:35:00
6   28   264   2025-02-04 23:24:00
7   26   251   2025-02-04 23:11:00
8   24   236   2025-02-04 22:56:00
9   22   219   2025-02-04 22:39:00
10  20   200   2025-02-04 22:20:00
11  18   179   2025-02-04 21:59:00
12  16   156   2025-02-04 21:36:00
13  14   131   2025-02-04 21:11:00
14  12   104   2025-02-04 20:44:00
15  10   75    2025-02-04 20:15:00
16  8    44    2025-02-04 19:44:00
17  6    11    2025-02-04 19:11:00
18  4    -24   2025-02-04 18:36:00
19  2    -61   2025-02-04 17:59:00
20  0    -100  2025-02-04 17:20:00
21  -2   -141  2025-02-04 16:39:00
22  -4   -184  2025-02-04 15:56:00
23  -6   -229  2025-02-04 15:11:00
24  -8   -276  2025-02-04 14:24:00
25  -10  -325  2025-02-04 13:35:00

This mock sequence spans roughly three 4-hour intervals. Now, if you bucket the sequence into 4-hour bins,

| bin _time span=4h@h

You get

A   B    C     _time
1   38   299   2025-02-04 20:00
2   36   296   2025-02-04 20:00
3   34   291   2025-02-04 20:00
4   32   284   2025-02-04 20:00
5   30   275   2025-02-04 20:00
6   28   264   2025-02-04 20:00
7   26   251   2025-02-04 20:00
8   24   236   2025-02-04 20:00
9   22   219   2025-02-04 20:00
10  20   200   2025-02-04 20:00
11  18   179   2025-02-04 20:00
12  16   156   2025-02-04 20:00
13  14   131   2025-02-04 20:00
14  12   104   2025-02-04 20:00
15  10   75    2025-02-04 20:00
16  8    44    2025-02-04 16:00
17  6    11    2025-02-04 16:00
18  4    -24   2025-02-04 16:00
19  2    -61   2025-02-04 16:00
20  0    -100  2025-02-04 16:00
21  -2   -141  2025-02-04 16:00
22  -4   -184  2025-02-04 12:00
23  -6   -229  2025-02-04 12:00
24  -8   -276  2025-02-04 12:00
25  -10  -325  2025-02-04 12:00

If you do stats/timechart on this, you get what you get. Now, what do you mean by "adjust the starting point of the spans"? What will the bucketed sequence look like? Give a concrete example using this dataset. You can reproduce the above sequence using the following code:

| makeresults count=25
| streamstats count as A
| eval _time = strptime("2025-02-05", "%F") - 60 * A * A
| eval B = 40 - 2 * A, C = 300 - A * A
| bin _time span=4h@h
As @PickleRick says, do not use append unless absolutely necessary. I'd suggest a direct expression to do what you want:

index=index1 OR (index=index2 sourcetype="api")
| eval EventId = coalesce(EventId, Number__c)
| stats count values(index) as indices by EventId
| search count < 2 indices = index1
Thanks all of you for your help. I uninstalled and reinstalled Splunk Enterprise and realized it was the Splunk forwarder that was giving me the issue. Apparently, I used the wrong port for the receiving indexer. Thanks again.
I have created a custom extension that captures the status of a Scheduled Job (e.g., Ready, Running, Others) and sends the data to AppDynamics as 1, 2, etc. respectively. Is there any way of configuring Health Rules to accommodate the following conditions?

TaskName - Triggers
Failure_PushData_ToWellnessCore_CRON - start at 12 AM and execute every 3 hrs
PushToWellness fromDbtoConsoleApp - start at 5 PM and execute every 3 hrs
Wellnessdatacron - start at 12:01 PM and execute every 1 hr
WellnessFailureCRON - 9am, 12am, 3pm, 6pm, 10pm, 1am, 5am
NoiseDataSyncCron - start at 11 AM and execute every 1 hr
NoiseWebhookProcessor - start at 11 AM and execute every 2 hrs

I tried configuring the cron with start time 0 17/3 * * * and end time 59 23 30 4 * to accommodate the "start at 5 PM and execute every 3 hrs" condition as a health rule schedule, but I am getting the error: Error creating schedule: Test. Cause: Unexpected end of expression. Can anyone help me with this?
How can I migrate SmartStore's local storage to a new storage device with no interruption to search and indexing functionality? Could it be as simple as updating homePath one index at a time, restarting the indexers, and allowing the cache manager to do the rest? 
Thank you for your response. I'm using a standalone Splunk UI Toolkit service (https://splunkui.splunk.com/Toolkits). It's linked within a Splunk app, so the Splunk UI app folder is added in Splunk/etc/apps.

I added a page using the npx splunk/create command. That page is the main menu, and I want to add a sub menu, so I used react-router nested routing. But this issue occurs when refreshing the page.

Can I make the sub-page with nested routing be recognized by Splunk as well? Or should I instead create separate pages for the individual submenus?

I used routing because creating a new page takes a lot of time to load.
Firstly, join is almost never the solution to a Splunk problem. Secondly, you do not have Column1 as an output of your tstats search, so how can it match up Col1+Col2 with the start/end times?

Generally, if you want to enrich the start/end times with info from a lookup, you would run the tstats, then look up the common field (Column1) and output the other fields (Column2).

If you want to end up with all the rows from the lookup in the output, even where there is no data for some of the rows, you would then add a final couple of commands, i.e.

| inputlookup File.csv append=t
| stats values(*) as * by Column1

which would give you all the rows from the lookup plus start/end times from the data for each Column1 found by the tstats search.
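A minimal sketch of that overall shape, assuming Column1 is available to tstats (i.e. it is an indexed field) and with the index name as a placeholder:

| tstats min(_time) as start max(_time) as end where index=your_index by Column1
| lookup File.csv Column1 OUTPUT Column2
| inputlookup File.csv append=t
| stats values(*) as * by Column1

The final stats collapses each Column1 to a single row, so lookup rows with no matching data still appear, just with empty start/end values.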
Technically you could use both base searches, but it's a bit fiddly and isn't really going to save you anything as the searches have to run anyway. You would get the job ids of each base search and then in your panel search you would use loadjob to load each of the jobs. However, you're still going to have to load the second job in some kind of subsearch (join?), so I'm not sure where you're trying to go with this.

If you are simply trying to speed up a join search, you can't achieve this with two base searches, as you are not changing anything and it will take the time it takes. The solution for a poorly performing search that uses join is to remove the join and rewrite the search in another way.

Looking at your existing searches, I'm not sure why you are trying to combine these in the first place, because you have appcode in your first search and you simply want appcode to get the list of details from the lookup. You are doing a lookup in the primary search but doing nothing with the retrieved data. Why don't you just do the lookup in your primary search after the chart, i.e.

index=serverdata
| rex "host_name=\"(?<server_host_name>[^\"]*)"
| chart dc(host_name) over appcode by host_environment
| eval TOTAL_servers=DEV+PAT+PROD
| table appcode DEV PAT PROD TOTAL_servers
| lookup servers_businessgroup_appcode.csv appcode output Business_Group as New_Business_Group