All Posts


Hi @ws, Splunk indexes all new data; the only exception is when the first 256 characters of the event are identical to an already-indexed event. Then (after indexing) you can dedup the results, excluding duplicated data from the results based on your requirements. Deduping is usually done on one or more fields; it's also possible to drop full duplicates by deduping on _raw. Ciao. Giuseppe
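A minimal sketch of the dedup step described above; the index, sourcetype, and case_id field here are hypothetical placeholders, not from the original question:

```
index=cases sourcetype=case_updates
| dedup case_id

index=cases sourcetype=case_updates
| dedup _raw
```

The first search keeps one event per case_id; the second drops only exact full duplicates by comparing the entire raw event.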
Hi @NWC
Unfortunately the "Qualys Technology Add-on (TA) for Splunk" is not supported/compatible with Splunk Cloud. When you go to the Splunkbase page (https://splunkbase.splunk.com/app/2964) and click on the "Version History" tab, it shows the compatibility. In order for a version to be installable on Splunk Cloud, it needs to be listed as Splunk Cloud compatible in the compatibility cell for that version. As this app isn't cloud compatible, you will not be able to install it on your Splunk Cloud instance. You might want to consider contacting Qualys to see if they will update the app to make it Splunk Cloud compatible. Note: when installing apps on Splunk Cloud, the system checks the app ID against the apps held on Splunkbase; if an app with the same ID exists on Splunkbase, it will suggest installing it via the App Browser page. Obviously this is only possible if the app is cloud compatible. Please let me know how you get on, and consider upvoting/karma for this answer if it has helped. Regards, Will
How about using the "top" command, something like this? index=_internal group=per_index_thruput series=* | top 10 host  
Hi @hansmaldonado
The easiest thing might be to push an update out via the Cluster Manager to point to the new home path; however, this will ultimately mean that you have zero cache, and subsequent searches may be slow whilst the cache re-populates. If you want to retain the cached data to prevent this, then I think it may be possible, depending on your configuration/architecture. While in maintenance mode, shut down one indexer at a time, move the cached files from the existing location to the new home path, and then update indexes.conf to reflect the new path. Once you start the indexer back up, you will have the original cache files locally on that indexer, but in the new location. You will then need to do this for each indexer. This isn't necessarily the ideal way to do it, but it will mean that you do not need to re-download cached data. It will also mean that your indexes.conf will vary between indexers until you have completed the process. Once complete, you should push out an updated indexes.conf via the CM with the updated settings so that you aren't in a position where it could revert! I would recommend trying this approach in a development environment first to ensure you are happy with the process involved. Please let me know how you get on, and consider upvoting/karma for this answer if it has helped. Regards, Will
I am currently using a customized app to connect to a case/monitoring system and retrieve data. I found out that Splunk has the ability to detect whether data has already been indexed. But what about the following scenario: will it be considered duplicate or new data, given that the update carries a new case-closed time? One of the previously closed cases has been reopened and closed again with a new case-closed time. Will Splunk Enterprise consider this new data to index?
Thanks, I will keep it in mind.
Hi @Ben, good for you, see you next time! Ciao and happy splunking. Giuseppe P.S.: Karma Points are appreciated
indexes.conf:

[volume:hot]
path = /mnt/splunk/hot
maxVolumeDataSizeMB = 40

[volume:cold]
path = /mnt/splunk/cold
maxVolumeDataSizeMB = 40

[A]
homePath = volume:hot/A/db
coldPath = volume:cold/A/colddb
maxDataSize = 1
maxTotalDataSizeMB = 90
thawedPath = $SPLUNK_DB/A/thaweddb

[_internal]
homePath = volume:cold/_internaldb/db
coldPath = volume:cold/_internaldb/colddb
thawedPath = $SPLUNK_DB/_internaldb/thaweddb
maxDataSize = 1
maxTotalDataSizeMB = 90

I collected data into each index, and the portion stored in the cold volume was A = 30MB and _internaldb/db = 10MB. I understood that the A index accounts for the larger share because its data volume and collection rate are larger and faster than _internal's. If I stop collecting data into the A index and keep collecting only into the _internal index, the old buckets in _internaldb/db are moved toward _internaldb/colddb in the order they were written, but they are not retained in colddb; they are deleted immediately. Additionally, the data that existed in A/colddb is deleted, oldest first. I understood that because the cold volume is limited to 40MB and is already full, buckets are not retained in _internaldb/colddb and are deleted immediately. But why is the data in A/colddb deleted? Afterwards, once A/colddb drops to 20MB, A/colddb is no longer deleted. The behavior I expected was that A/colddb would be deleted until it reached 0, and only then would the old buckets from _internaldb/db be moved to _internaldb/colddb and retained. I'm curious why the results differ from what I expected, and whether, when maxTotalDataSizeMB is the same, the volume maintains the same ratio between indexes.
Thanks again
What can I say? The aligntime option works for me. Like index=firewall earliest=-15m | timechart count span=1m aligntime=30
1. This is not Splunk Support. This is a volunteer-powered users community. 2. We have no knowledge of who you are or where you are let alone who your Account Manager is. 3. As you are a customer, you should have someone with whom you've dealt before. If you can't find it, try to use either Sales (preferably) or Support contact for your location - https://www.splunk.com/en_us/about-splunk/contact-us.html
I have deja vu; I think I answered the same question recently. But to the point:
1) There is no way to create an input with a dynamic definition using just Splunk's built-in mechanisms.
2) It's hard to believe that you have a decently sized environment without any standardization. If you do, I strongly advise getting it cleaned up, because otherwise it will bite you in the most inconvenient place at the most inconvenient time.
3) A very ugly workaround could be to define an "input" running your script, which would generate inputs.conf dynamically, but this would require bending over backwards to handle forwarder restarts. I would very strongly (as opposed to just "strongly" from the previous point) advise against it.
Thank you for the replay
There is no straightforward answer to such a question. Firstly, let's jump to question 3: can you search without specifying an index? Well, yes and no. Yes, because you can issue the search command without explicitly listing an index. But if you don't say which indexes you want searched, Splunk will search through the indexes set as default for your user's role. Good practice, however, is to _not_ give users default indexes (and most importantly, don't define all indexes as default search indexes!) so that each search must specify them directly, avoiding confusion and not mistakenly spawning heavy searches across too many indexes. So:
1) Yes, you can do index=*, and if a user's role only has permissions for index=A and index=B, only those indexes will be searched. So technically you could do that. But it's a bit of a bad design: the same dashboard will behave differently for different users without any clear indication as to why, especially if it is meant to present overall statistics without explicitly listing the indexes involved.
2) Yes, searching across all indexes can cause performance issues (the search itself matters most, of course, but having to browse through buckets from all indexes, even if only to exclude them by bloom filter, can be a performance hit).
4) It all depends on what your "application" is. It's hard to give a good answer to such a general question. On the one hand, it's good to have separate dashboards for different audiences so that they can, for example, be customized if needed. On the other hand, it adds maintenance overhead. So the usual answer is "it depends".
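To make the default-index advice concrete, here is a sketch of the relevant role settings in authorize.conf; the role and index names are hypothetical, so check authorize.conf.spec for your version:

```
[role_app_team]
# Indexes the role may search when they are explicitly named in the search
srchIndexesAllowed = index_a;index_b
# Deliberately left empty so a search with no index= clause matches nothing by default
srchIndexesDefault =
```

With this, users must write index=index_a (or index_b) explicitly, which avoids accidental heavy searches across every default index.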
The 50k results limit for subsearch applies only to join! The default limit for a subsearch is 10k results.
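Both limits are configurable in limits.conf; as a sketch (the values shown are my understanding of the defaults, so verify against limits.conf.spec for your version):

```
[subsearch]
# default cap on results returned by a subsearch
maxout = 10000

[join]
# separate cap applied to the subsearch feeding a join
subsearch_maxout = 50000
```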
Hi @secure, as @PickleRick said, you cannot use a command such as rex in the main search. You have two choices. Move the rex after the main search:

(index=serverdata sourcetype="server:stats") OR (index="hostapp" source=hostDB_Table dataasset="*host_Data*") | rex "app_code=\"(?<application_code>[\w.\"]*)"

or use append:

index=serverdata sourcetype="server:stats" | rex "app_code=\"(?<application_code>[\w.\"]*)" | append [ search index="hostapp" source=hostDB_Table dataasset="*host_Data*" ]

The second solution works only if the secondary search returns fewer than 50,000 results, which is why I prefer the first one. In addition, there's a third solution that I prefer most: if you create a fixed field extraction using the regex, you don't need to insert it in the search and you can use only the main search:

(index=serverdata sourcetype="server:stats") OR (index="hostapp" source=hostDB_Table dataasset="*host_Data*")

Ciao. Giuseppe
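As a sketch of the third solution, the fixed extraction could live in props.conf on the search head; the stanza name is taken from the sourcetype in the search, and the extraction name is a hypothetical choice:

```
[server:stats]
EXTRACT-application_code = app_code=\"(?<application_code>[\w.\"]*)
```

Once this is in place, application_code is extracted automatically at search time and the rex command is no longer needed in the search string.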
Hi @Karthikeya, in general, whether to have a common dashboard for all applications depends on your requirements and on the fields of all applications, so there isn't one answer based on best practices; the rules are your requirements. If all applications have the same fields, you can have one dashboard; if they have different fields, the dashboard could be barely readable and I'd prefer different dashboards. Anyway, answering your questions:

1. Can we create a common dashboard for all applications (nearly 200+ indexes) by giving index=* in the base search? We have A to Z indexes, but User A has access to only the A index. If user A gives index=*, will Splunk look through indexes A to Z, or only the A index they have access to? (I am afraid of wasting Splunk resources.)
First of all, having more than 200 indexes isn't a best practice, because they are very difficult to manage and use: you should use different indexes only when you need different retention policies and/or different access grants. About the user: when a user runs index=*, they see only the indexes granted to them. That said, I don't like index=* in searches; find a rule to limit them.

2. We have a separate role called test engineer who has access to all indexes (A to Z). Is it a good idea to have a common dashboard for all, given that when the engineer loads the data, all indexes will be loaded, which in turn could cause performance issues for users?
As I said, I don't like an index=* search even if the user can access all indexes, and anyway viewing more than 200 indexes is really difficult! I'd limit the number of indexes, also grouping different logs into the same index (an index isn't a database table; it can contain different and heterogeneous logs) when they share the same retention and grant rules.
In addition, I suppose that your applications are different, with different fields and information, so I suppose it's difficult to display all of them for all applications using the same dashboard!

3. We have app_name in place. Can I exclude index=* from the base search and give app_name="*app_name*", with app_name as a dropdown, so that by default * will not be given? Once the user selects an app_name, the dashboard will be populated?
In general, using an asterisk at the beginning of a search isn't a best practice; you could create an input using a lookup containing all the apps and select events based on the selected value. The lookup can be automatically updated using a scheduled search that runs e.g. every night.

4. Or would having a separate dashboard for each application work? But the ask is to have a common dashboard. Not sure if this is a good practice?
It's a best practice to try to reduce the number of dashboards, but probably only one isn't the most efficient way to display your data! Try to define some grouping rules, e.g. applications with the same scope, the same information, or the same audience, and create a few dashboards, one for each group. Ciao. Giuseppe
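A minimal sketch of the lookup approach mentioned in point 3; the lookup name app_names.csv is a hypothetical choice, and you would replace the base search with whatever search covers your application events. A nightly scheduled search populates the lookup:

```
index=your_app_index
| stats count by app_name
| fields app_name
| outputlookup app_names.csv
```

and the dashboard dropdown input is then populated with:

```
| inputlookup app_names.csv
| fields app_name
```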
Are you referring to the indexers for S2S forwarding, or to something else such as HEC, UI, or REST API access? If you are looking for your indexer IPs, then you may be able to resolve the DNS names in the outputs.conf file, as @gcusello suggested, and then deduplicate the results; however, be aware that these IPs can change if Splunk scales the number of indexers in operation within your stack or if any indexers require rebuilding. The search head (SH) IPs, by contrast, are generally fixed and you wouldn't expect them to change often, other than on the rare occasions where SHs are rebuilt. Looking in your _internal index, you can find a list of hosts in the format sh*.splunkcloud.com, which you can resolve to provide your list of SH IP addresses for REST access if required.
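A sketch of that _internal search; the host pattern is illustrative, as the exact host naming can vary by stack:

```
index=_internal host=sh*.splunkcloud.com
| stats count by host
| fields host
```

You would then resolve each returned hostname to obtain the SH IP addresses.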
Hi @Sec-Bolognese
I've achieved this before using the AWS CloudWatch agent; as the others have mentioned, this isn't really something you can do with the Splunk Universal Forwarder.

Step 1: Set up IAM permissions for the CloudWatch agent, if not already in place. Create (or use an existing) IAM role that has permissions for CloudWatch Logs. Ensure the role includes at least these actions: logs:CreateLogGroup, logs:CreateLogStream, logs:PutLogEvents, logs:DescribeLogStreams. If using EC2, attach the IAM role to your instance; otherwise, provide credentials that have the above permissions.

Step 2: Install the CloudWatch agent.
For Amazon Linux, RHEL, or CentOS:
sudo yum update -y
sudo yum install -y amazon-cloudwatch-agent
For Ubuntu or Debian:
sudo apt-get update
sudo apt-get install -y amazon-cloudwatch-agent
(Alternatively, you can download the package directly from AWS if needed.)

Step 3: Create the CloudWatch agent configuration file at /opt/aws/amazon-cloudwatch-agent/etc/amazon-cloudwatch-agent.json and paste in the following content:

{
  "logs": {
    "logs_collected": {
      "files": {
        "collect_list": [
          {
            "file_path": "/opt/splunkforwarder/var/log/*",
            "log_group_name": "splunkforwarder-logs",
            "log_stream_name": "{instance_id}",
            "timestamp_format": "%m-%d-%Y %H:%M:%S.%f %z"
          }
        ]
      }
    }
  }
}

Note:
- Adjust "file_path" if you need a more specific file pattern (e.g., "/opt/splunkforwarder/var/log/*.log").
- "log_group_name" is the CloudWatch Logs group that will be used. If it doesn't exist, the agent can create it (given sufficient permissions).
- "log_stream_name" uses {instance_id} as a placeholder. You can change this if desired.
- If your logs do not contain timestamps in the specified format, adjust or remove the "timestamp_format" setting.

(Optional) You can also run the configuration wizard and answer the prompts to generate a configuration file interactively:
sudo /opt/aws/amazon-cloudwatch-agent/bin/amazon-cloudwatch-agent-config-wizard
Step 4: Start the CloudWatch agent with your configuration. Run the following command to start the agent using your configuration file:
sudo /opt/aws/amazon-cloudwatch-agent/bin/amazon-cloudwatch-agent-ctl -a fetch-config -m ec2 -c file:/opt/aws/amazon-cloudwatch-agent/etc/amazon-cloudwatch-agent.json -s
In the above:
- -a fetch-config tells the agent to fetch the configuration.
- -m ec2 indicates that the agent is running on an EC2 instance (use -m onPremise if running elsewhere).
- -c file:... specifies the path to your configuration file.
- -s starts the agent.

Step 5: Verify that logs are being sent. Check the CloudWatch agent log file to ensure it started correctly:
sudo tail -f /opt/aws/amazon-cloudwatch-agent/logs/amazon-cloudwatch-agent.log
Then log in to the AWS Console, navigate to CloudWatch > Logs, look for the log group "splunkforwarder-logs", and verify that log streams and log events are appearing.

This should then allow you to send logs from /opt/splunkforwarder/var/log to CloudWatch Logs as well as to your Splunk Cloud instance as required. Please let me know how you get on, and consider upvoting/karma for this answer if it has helped. Regards, Will
Hi @Andre_, OK, use your script; the logic seems to be correct. Are you sure that it isn't possible to define a rule for the IIS logs? It seems very strange that your IIS logs are scattered across the whole filesystem without any rule; I suppose they are in a predefined location, and you could start from that location for your ingestion. Ciao. Giuseppe