All Posts

Hi @Huy.Nguyen, first off, cool profile pic! Second, I found this AppD Docs page that could be helpful on how to uninstall a Smart Agent from the command line: https://docs.appdynamics.com/appd/24.x/24.8/en/agent-management/smart-agent/uninstall-smart-agent
Hi, is the dnslookup command available in Splunk Cloud as it is in Splunk Enterprise?
If you want to avoid using two streamstats commands, you can try this:

| streamstats count as Rank
| delta Score as Diff
| eval Rank=if(Diff=0,Rank-1,Rank)
| fields - Diff

And with two streamstats commands, you can try this to avoid one extra filldown command:

| streamstats count as Rank
| streamstats window=2 range(Score) as range
| eval Rank=if(Rank=1 OR range != 0, Rank, Rank-1)
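A minimal self-contained sketch of the first approach, using makeresults to fabricate a Score field (the field names and sample values here are illustrative, not from the original question):

```spl
| makeresults count=5
| streamstats count as row
| eval Score=case(row=1,100, row=2,90, row=3,90, row=4,80, row=5,70)
| streamstats count as Rank
| delta Score as Diff
| eval Rank=if(Diff=0,Rank-1,Rank)
| fields - Diff, row, _time
```

Note that the delta-based variant only looks one row back, so it repairs a two-way tie but not longer runs of equal scores; the two-streamstats variant has the same one-row window via window=2.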
Hi, if we are unable to upgrade the controller (v24.4.1) to version 24.6 or higher at this time, is there a way to remove the inactive Smart Agent from the controller via the command line or from the database? Thanks!
Have you tried the lookup command? https://docs.splunk.com/Documentation/Splunk/9.3.1/SearchReference/Lookup
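For illustration, a minimal sketch of how the built-in dnslookup external lookup is typically invoked (clientip and clienthost are the fields that lookup defines; src_ip is a placeholder for your own field name):

```spl
... | lookup dnslookup clientip AS src_ip OUTPUT clienthost AS src_host
```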
I reduced the number of fields in the CSV and that solved the issue.
Hi Team, can you please help me with a solution to use a CSV file containing external vs. internal user ID data in Splunk? Below is the current query and output that extracts the internal user ID; I need another column with the corresponding external user ID.

CSV file: ABC.csv
usr_id,eml_add_ds
internal user id 1,external user id 1
internal user id 2,external user id 2
internal user id 3,external user id 3
internal user id 4,external user id 4

Query:
(index=ABC) ("Start" OR "Finish") Properties.AspNetCoreEnvironment="*"
| rex field=Message "Start:\s*(?<start_info>[^\s]+)"
| rex field=Message "user\s(?<Userid>[^\took|.]+)"
| search start_info=*
| table Userid
| sort time

Output:
It is difficult to tell without seeing your events and lookup file values, but taking a clue from the field name "cidr": does your lookup file contain CIDR-style values, and if so, have you created a lookup definition with the advanced match-type setting CIDR(cidr)?
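For reference, a CIDR match type is typically configured in a lookup definition, which corresponds to a transforms.conf stanza along these lines (the stanza name and filename here are placeholders):

```
[my_cidr_lookup]
filename = file.csv
match_type = CIDR(cidr)
```

Without this setting, the lookup does an exact string match, so an IP like 10.1.2.3 will never match a stored value like 10.1.2.0/24.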
@richgalloway @jawahir007 Thank you both for the nice explanation. As part of my migration activity, I want to clean up or remove all the unnecessary sourcetypes from Splunk so that we use less disk space and can move data more quickly from the old server to the new one. Per your suggestion, the delete command will never reduce disk space, and during migration the entire data set will have to be copied. Am I understanding that correctly?

Some additions to my first ask:
1. All the sourcetypes come from one source.
2. All the sourcetypes belong to only one index.
3. We are using transforms and props to build the sourcetypes. When events matching a particular pattern arrive, transforms assign the sourcetype (per the regex defined there).
4. All the parsing and filtering is handled by a Python script.
5. Both unnecessary and necessary sourcetypes are in that one index.

Thanks
Finally got what I wanted:

| inputlookup servers
| dedup host
| sort host
| table host

...this gives me the delineated list of servers that are individually selectable. Now I need to add my parameter tokens to trim the list down based on the first 3 LOVs. Thanks all!
1. Using the delete command

In Splunk, the delete command marks events as deleted from search results. It does not physically remove the events from disk or from the index; it hides the marked events so they are not returned in future searches. The events are still present in the index, just flagged as deleted.

2. Permanently deleting data via index cleanup (retention policies)

To physically delete data from Splunk's indexes, you typically rely on index retention policies. Splunk automatically deletes older data based on index size or time-based retention.

Set index retention policies:
- Maximum size (based on disk usage): once the index exceeds a defined size, Splunk deletes the oldest data.
- Time-based retention: Splunk can automatically remove data that is older than a specific period (e.g., data older than 30 days).

Steps: modify the indexes.conf file, located in $SPLUNK_HOME/etc/system/local/indexes.conf or within an app-specific folder. Example configuration for size- or time-based retention:

[your_index]
maxTotalDataSizeMB = 5000        # maximum size of the index in MB
frozenTimePeriodInSecs = 2592000 # 30 days in seconds (30 * 24 * 60 * 60)

maxTotalDataSizeMB sets the maximum disk space the index can use; when this limit is reached, older data is deleted. frozenTimePeriodInSecs specifies the number of seconds to retain data; once data is older than this, it is deleted. After the index reaches the size or time threshold, old data is deleted automatically by Splunk.
Individual sourcetypes cannot be deleted. Data is deleted by the bucket, which is a subset of an index. When a bucket is deleted, all events in that bucket are removed from the system.

The delete command does not delete data; it merely hides it from view. There is no backend command to delete data.

If you are fortunate, the undesired sourcetypes are the only ones in their respective indexes. In that case you can set frozenTimePeriodInSecs for the index(es) to 1 and wait for Splunk to delete the buckets in the index(es).

If, like most sites, you have a mixture of sourcetypes in your indexes, it becomes more of a challenge. One option:
1. Copy the sourcetypes you wish to keep into a different index using the collect command. This will impact your ingestion license.
2. Set frozenTimePeriodInSecs on the original index to 1 and wait for buckets to be deleted. This will delete everything in the index. On-prem environments can use the clean CLI command to delete the index.
3. Revert the frozenTimePeriodInSecs setting.
4. Use the collect command to copy the desired data back to the original index. This avoids having to change the queries that use that index name, and will impact your ingestion license (again). In an on-prem environment, you can instead rename the index to the original name.

See https://docs.splunk.com/Documentation/Splunk/9.3.0/Indexer/RemovedatafromSplunk#Remove_all_data_from_one_or_all_indexes for more information.
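A minimal sketch of the copy step using collect (the index and sourcetype names here are placeholders; run over All Time, or whatever range covers the data you want to keep):

```spl
index=old_index (sourcetype=keep_me_1 OR sourcetype=keep_me_2)
| collect index=temp_index
```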
I think I understand what you are asking about, but without sample ingested data and a sample of the new output it is hard to decipher what is going wrong.
Hello, we configured rsyslog and it is now receiving logs from appliances, saving them locally to disk, and sending copies to the remote destinations on the client side. But we now have problems with indexing: data is no longer being received from the HFs. I think the UFs are undersized for all of these activities. Is there a way to check whether we have a performance problem?

Thank you, Andrea
Hello, I am trying to construct a search where I do a lookup against a single table, then rename the fields and change how they are displayed; however, the lookup and eval commands don't seem to be working as I would like. The main search I am performing is basic, using some source subnets and then trying to have the lookup return which area of the business they belong to. Below is the lookup portion of my search:

index="logs" sourceip="x.x.x.x" OR destip="x.x.x.x"
| lookup file.csv cidr AS sourceip OUTPUT field_a AS sourceprovider, field_b AS sourcearea, field_c AS sourcezone, field_d AS sourceregion, cidr AS src_cidr
| lookup file.csv cidr AS destip OUTPUT field_a AS destprovider, field_b AS destarea, field_c AS destzone, field_d AS destregion, cidr AS dest_cidr
| fillnull value="none"
| eval src_details_combined=sourceprovider."-".sourcearea."-".sourcezone."-".sourceregion
| eval dest_details_combined=destprovider."-".destarea."-".destzone."-".destregion
| eval src_details_combined=if(src_details_combined=="none-none-none-none","notfound",src_details_combined)
| eval dest_details_combined=if(dest_details_combined=="none-none-none-none","notfound",dest_details_combined)
| stats count values(sourceip) as sourceip values(destip) as destip by src_details_combined, dest_details_combined, rule, dest_port, app
| table src_details_combined, dest_details_combined, app, count

When I run the search I do get some results, but the src_details_combined and dest_details_combined fields always return as "notfound", even though I know the IPs should match in the lookup CSV. Can anyone see where I have gone wrong in my search?
Hello Splunkers! I hope all is well.

There are some sourcetypes in Splunk which have a large amount of data, but we are not using those sourcetypes in any of our dashboards or saved searches. I want to delete those sourcetypes, and I have some questions associated with the deletion:
1. What is the best approach to delete the sourcetype data in Splunk (using the delete command or from the backend)?
2. Does the deletion of historical data from those sourcetypes impact the other, useful sourcetypes?
3. Can it cause corruption of the buckets?
4. The unused sourcetypes carry millions of events. What is the fastest approach to delete the large historical data chunks?

Thanks in advance. Advice and suggestions are really appreciated!
No option to wrap it. Honestly, you may want to replace your viz with a table, which can treat all of these as text and wrap them. When showing as much information as you are, a table provides options that may fit better.
Hello, I am responsible for providing self-signed SSL certificates for our Splunk servers. Could you guide me, considering that I am working in a distributed architecture consisting of a SH, 2 indexers, a server handling DS, and forwarders? How many certificates will I need to generate, and do the forwarders also require SSL certificates? I would greatly appreciate any relevant documentation to assist me in this process. Best regards,
Can the Splunkbase app "Splunk AI Assistant for SPL" be installed on an on-prem deployment of Splunk Enterprise, or is the app only for public cloud deployments of Splunk Enterprise? If it is not for on-prem, are there plans to build an app for on-prem, and what are the current timelines?