All Topics

Hi, what location would we see on the geo dashboard if the user connects over a VPN to reach the application? If a location is configured in the VPN, can we still get the user's actual location while they use the application?
I have a table that looks like the one below; it shows how the app users are distributed across departments. There are several apps and several departments.

app | department | dep_headcount | avg_users_per_hr

Is there any way to visualize this so that any data point in the chart shows:
application: app_name
department: dep_name
avg_users_per_hr: X
dep_headcount: Y
Hi All, We are using Splunk Enterprise with the perpetual license model and an index capacity of 5 GB. We have suddenly started facing an issue with indexing even though the limit has not been breached in the last 30 days. Can you please guide us on this case?
Is there a format that needs to be adhered to when using a blacklist with regex? I am trying to match "New Process Name:" with a regex that will filter events from a specific data source. I have tested the regex with regex101 and it identifies the events that I want to filter. It is basically:

blacklist3 = New\sProcess\sName\:\s+C\:\\Program\sFiles\s\(x86\)\\......

Should that work, or do I need to format it more like:

blacklist3 = New Process Name = C\:\\Program\sFiles\s\(x86\)\\......

My current blacklist is resulting in:

ExecProcessor - message from ""C:\Program Files\SplunkUniversalForwarder\bin\splunk-winevtlog.exe"" splunk-winevtlog - Processing: 'blacklist3' [legacy], range error found in 'regex'......

According to this document, https://docs.splunk.com/Documentation/Splunk/9.0.3/Data/Whitelistorblacklistspecificincomingdata , blacklist = <your_custom_regex> should work. Thanks
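For Windows event log inputs there is also a documented key="regex" format that may sidestep the range error, since it anchors each regex to a single named field of the event. A hedged sketch — the stanza name and the exact backslash escaping are assumptions to verify on a test forwarder:

```
[WinEventLog://Security]
# Match against the Message field only; the regex sits inside double quotes,
# so literal spaces are fine. Backslashes in the path are regex-escaped; the
# conf parser may consume one level of escaping, so test both \\ and \\\\.
blacklist3 = EventCode="^4688$" Message="New Process Name:\s+C:\\\\Program Files \(x86\)\\\\"
```

The "range error found in 'regex'" message usually points at an invalid character range or quantifier range somewhere in the expression, so simplifying the escapes one piece at a time can help isolate the offending token.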
Our client is asking us for information that is stored in Splunk Cloud, and I am not aware of how to access a copy of it, either because they simply want to have it or because they want it backed up from time to time. The second part of the question: if there is a way to keep a copy of that information, what is the restore process?
While adding the Splunk app to Splunkbase I am getting this error:

The "id" field found in app.conf does not match the root folder of the application

My config:

[install]
is_configured = 0
install_source_checksum = xxxxx

[launcher]
author = Abhinav Ranjan
description = AccuKnox App for Splunk lets AccuKnox customers and KubeArmor users send alerts from Feeder or Workflows to visualize the data in the AccuKnox Splunk dashboards. AccuKnox, CNAPP that just works, from Build to Runtime. See what your applications are really doing and Automatically generate Zero Trust, least privilege policies to continuously monitor and protect your Network, Application and Data.
version = 1.0.0

[package]
id = SplunkforAccuKnox

[ui]
is_visible = 1
label = AccuKnox
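Splunkbase checks that the id under [package] exactly matches the name of the app's root directory (and the top-level folder inside the uploaded tarball). A minimal sketch, assuming the folder is renamed to match the existing id:

```
# $SPLUNK_HOME/etc/apps/SplunkforAccuKnox/default/app.conf
[package]
id = SplunkforAccuKnox
```

Alternatively, keep the current folder name and change id to match it instead. As far as I can tell the comparison is exact, so repackage the tarball after renaming so its top-level directory carries the same name.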
When sending batch data to the HEC server, with multiple events per request, is it better to send large (10k-100k), medium (1k-10k), or small (<1k) batches to the HEC server? Is there anything that can be done to ensure data is ingested faster and more smoothly?
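For reference, HEC accepts several events concatenated in a single request body, which is what makes batching cheap; a sketch of one batched POST (the host, token, and sourcetype are placeholders):

```
POST https://splunk.example.com:8088/services/collector/event
Authorization: Splunk <your-hec-token>

{"event": "first event",  "sourcetype": "my_sourcetype"}
{"event": "second event", "sourcetype": "my_sourcetype"}
{"event": "third event",  "sourcetype": "my_sourcetype"}
```

As a rule of thumb, batches in the hundreds of kilobytes amortize HTTP and auth overhead well while staying clear of the server's max_content_length limit; reusing connections (keepalive) on the client side tends to matter as much as the batch size itself.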
Hi, I'm a Splunk newbie. I'm confused about the MC, CM, and LM, so I'm asking a question.

1. Is it true that the monitoring console exists to check the indexers' health and CPU usage?
2. If number 1 is correct, I wonder why there is a license usage tab in the monitoring console menu. Does the monitoring console also check the license pool? (Does it also serve as a license master?)
3. Is it correct to say that the indexer cluster master is a role when divided based on Splunk components, and the monitoring console is a built-in function of the cluster master? Or do the monitoring console and the cluster master exist as separate instances?
Hi, We have installed the Splunk Universal Forwarder on a VIOS server and pushed TA-metricator-for-nmon. However, we are unable to get any metrics. A look at the internal logs shows the following error:

/splunkforwarder/var/log/metricator/var/nmon_repository/fifo1/nmon_timestamp.dat: A file or directory in the path name does not exist.

Is this an install issue, or are there some configuration changes that need to be made? Thanks, AKN @guilmxm
Hello everyone, how are you? I am trying to perform a search in the Cylance Protect app, where I have the following event as an example:

2023-02-08T13:25:10.484000Z sysloghost CylancePROTECT - - - Event Type: Threat, Event Name: threat_changed, Device Name: NB-2071, IP Address: (172.47.102.56), File Name: main.exe, Path: C:\DIEF2023.2.0, Drive Type: Internal Hard Drive, File Owner: AUTORIDADE NT\SISTEMA, SHA256: 8B2F7F3120DD73B2C6C4FEA504E60E65886CC9804761F8F1CBE18F92CA20AC44, MD5: 70D778C4A1C17C2EFD2D7F911668E887, Status: Quarantined, Cylance Score: 100, Found Date: 2/8/2023 1:25:10 PM, File Type: Executable, Is Running: False, Auto Run: True, Detected By: FileWatcher, Zone Names: (HOMOLOGAÇÃO), Is Malware: False, Is Unique To Cylance: False, Threat Classification: PUP - Generic, Device Id: 6c4e6c22-bf96-4de4-897b-cea83b8989b4, Policy Name: Política de Proteção N3 - Bloqueio

In this case, note the SHA256 parameter; it is the basis of the panel I need to create. I need to generate a chart that presents the number of distinct SHA256s detected month by month, observing the following rules:

1. If an SHA256 was detected in January, the chart should count one.
2. If the same SHA256 is detected again in February, the chart should count it again.
3. However, if the same SHA256 was detected twice in the same month, it only counts as one.

I tried several different ways to perform this search, but was not successful.
Here are some examples:

eventtype=cylance_index sourcetype=syslog_threat Tenant="$Tenant$" * Status=Quarantined | timechart span=1mon count as Total

This works, but it counts the number of monthly events, so the same SHA256 is counted more than once.

eventtype=cylance_index sourcetype=syslog_threat Tenant="$Tenant$" * Status=Quarantined | dedup SHA256 | stats count as Total by month | timechart span=1mon sum(Total) as Total

This time the result was "No Results Found".

eventtype=cylance_index sourcetype=syslog_threat Tenant="$Tenant$" * Status=Quarantined | stats count by SHA256, month | timechart span=1mon sum(count) as Total

Again, no results found. Thank you in advance.
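A hedged sketch using dc() instead of dedup: dedup removes a SHA256 for the whole search window (not per month), and month is not an extracted field in these events, which is likely why the last two searches returned nothing. timechart buckets by _time on its own:

```
eventtype=cylance_index sourcetype=syslog_threat Tenant="$Tenant$" Status=Quarantined
| timechart span=1mon dc(SHA256) as Total
```

dc() counts each distinct SHA256 once per monthly bucket but again in later months, which matches the three rules described above.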
One of my cold-bucket indexers lost its connection to the SAN, and now I have a lot of data and files in my cold bucket's lost+found directory.
1. How can I re-ingest/put this data back in its actual folders in the cold bucket?
2. What can I do with the data in the lost+found directory located in the cold bucket?
Getting this error with either HTTPS or SSH connectivity tests:

App 'Git' started successfully (id: xxxxxxxxx) on asset: 'bit_bucket'(id: 3)
Loaded action execution configuration
Querying to verify the repo URI
Configured repo URI: either https or ssh
Error while calling configured URI
Connectivity test failed. No action executions found.

I have no real logs, and when I try to run it within a playbook using the connect ssh option I get this error:

Failed to push: Check credentials and git server availability
Hi Splunkers, In the Tech Talk titled "7 Tips To Boost Performance of Your Splunk Enterprise Security Operations" there was an app that was supposed to be released a long time ago called "Perfinsights". I have searched for it on Splunkbase and Google but couldn't find it anywhere! Can someone please point me in the right direction? The Tech Talk can be found here: https://www.youtube.com/watch?v=UXFIKMJHwgs
Hello, Can I mix different types of disks, for example SSDs and HDDs, when installing search heads or indexers in an on-premises environment? Best regards,
Hi All, I have a field named ip_address which has 50 IP values in it. At every 5-minute interval, I receive the same values:

ip_address
10.1.1.1
10.1.1.2
10.1.1.3
.
.
.
10.1.1.49
10.1.1.50

What are the ways to list the values which are not coming into Splunk? Let's say 10.1.1.2 and 10.1.1.45 are not coming in; then I need those missing values listed in a statistics table so I can create an alert for missing IP addresses. Please help. Thanks in advance.
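One common pattern is to keep the expected 50 addresses in a lookup and subtract what actually arrived. A sketch, assuming a lookup file expected_ips.csv with a single ip_address column and an index named my_index (both names are placeholders):

```
| inputlookup expected_ips.csv
| search NOT [ search index=my_index earliest=-5m
               | stats count by ip_address
               | fields ip_address ]
```

Each row returned is an IP with no events in the last 5 minutes, so the alert can simply trigger when the result count is greater than zero.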
Hello, what I am trying to do in this search is sum the total CPU seconds, by report class, for a one-day period. Once I have that sum, I would like to take it one step further and multiply it by our MSU factor to determine the MSUs used by a specific report class for any given day. I believe I need to store the result from the timechart statement as a new variable, to be able to multiply that variable by the MSUFactor. I have not had any luck trying a combination of 'eval' statements or leveraging the AS keyword to store the result into a variable I can further work with. I appreciate any help you may be able to offer!

index=z* MFSOURCETYPE=SMF030 Subtype=2 `calccpusecs`
| where Rptcls IN("RHOTBAT","RPDBATLO","RPDBATMD","RSAGBAT","RTSTBAT")
| eval MSUFactor=(37209.3023/5/216000)
| timechart span=1d sum(cpusecs) by Rptcls
| addcoltotals
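Rather than storing the timechart result in a variable, the multiplication can likely move before the timechart, so each event's CPU seconds are converted to MSUs first and the daily sum is already in the right units. A sketch built from the search above:

```
index=z* MFSOURCETYPE=SMF030 Subtype=2 `calccpusecs`
| where Rptcls IN("RHOTBAT","RPDBATLO","RPDBATMD","RSAGBAT","RTSTBAT")
| eval msus = cpusecs * (37209.3023/5/216000)
| timechart span=1d sum(msus) by Rptcls
| addcoltotals
```

The original version loses MSUFactor because timechart only carries forward the aggregations it is asked for; an alternative would be a foreach over the columns after the timechart, but evaluating per event first is simpler.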
Hi All, We are planning to migrate the KVstore storage engine from mmap to wiredTiger. I know it is safe to disable KVstore on indexers, but I'm just wondering what steps to take if we do need to upgrade the storage engine from mmap to wiredTiger on an indexer cluster.
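For the search-head side, recent Splunk versions ship a CLI migration command; a hedged sketch of the per-member procedure (check the docs for your exact version, and back up the KV store first):

```
# On one SHC member at a time (back up $SPLUNK_DB/kvstore first);
# whether splunkd must be stopped beforehand varies by version, so
# follow the documented procedure for your release:
splunk migrate kvstore-storage-engine --target-engine wiredTiger

# Verify the engine afterwards:
splunk show kvstore-status
```

On the indexers themselves KV store is not used, so disabling it there (as you already noted is safe) avoids the migration question entirely on that tier.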
Hi, Does anyone have a method, an app, or a query that can check and compare the confs between all SHC members? Perhaps there is a way with btool or rsync. I was given a PS Tech Assessment app, but it is not working correctly; I don't think the PS tech knew how to install or use it. Thank you
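In the absence of a working app, btool plus a plain diff gets most of the way there. A sketch (paths and hostnames are examples):

```
# On each SHC member, flatten the effective config to a file:
splunk btool props list --debug > /tmp/props_$(hostname).txt
splunk btool server list --debug > /tmp/server_$(hostname).txt

# Pull the files to one host and compare member against member:
diff /tmp/props_sh01.txt /tmp/props_sh02.txt
```

The --debug flag prints which file each setting came from, so a diff shows not just differing values but which app's local directory introduced them. Expect benign differences in server.conf (GUIDs, hostnames) that need filtering out.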
I have a dataset that uses a normally non-segmented character to separate meaningful and commonly-used search terms. Sample events:

123,SVCA,ABC123,DEF~AP~SOME_SVC123~1.0,10.0.1.2 ,67e15429-e44c-4c27-bc9a-f3462ae67125,,2023-02-10-12:00:28.578,14,ER40011,"Unauthorized"
123,SVCB,DEF456,DEF~LG~Login~1.0,10.0.1.2,cd63b821-a96c-11ed-8a7c-00000a070dc2,cd63b820-a96c-11ed-8a7c-00000a070dc2,2023-02-10-12:00:28.578,10,0,"OK"
123,SVCC,ZHY789,123~XD-ABC~OtherSvc~2.0,10.0.1.2 ,67e15429-e44c-4c27-bc9a-f3462ae67125,,2023-02-10-12:00:28.566,321,ER00000,"Success"
456,ABC1,,DEFAULT~ENTL~ASvc~1.0,10.0.1.2 ,b70a2c11-286f-44da-9013-854acb1599cd,,2023-02-10-11:59:44.830,14,ER00000,"Success"
456,DEF2,,456~LG~Login~v1.0.0,10.0.0.1,27bee310-a843-11ed-a629-db0c7ca6c807,,2023-02-10-11:59:44.666,300,1,"FAIL"
456,ZHY3,ZHY45678,DEF~AB~ANOTHER_SVC121~1.0,10.0.0.1 ,19b79e9b-e2e2-4ba2-a7cf-e65ba8da5e7b,,2023-02-10-11:58:58.813,,27,ER40011,"Unauthorized"

Users will often search for individual items separated by the ~ character, e.g., index=myindex sourcetype=the_above_sourcetype *LG*. My goal is to reduce the need for leading wildcards in most searches, since this is a high-volume dataset, by adding '~' as a minor segmentation character at index time. I've tried the props.conf and segmenters.conf below without success. Could anyone provide any insight?

<indexer> SPLUNK_HOME/etc/apps/myapp/local/props.conf

[the_above_sourcetype]
SHOULD_LINEMERGE=false
LINE_BREAKER=([\r\n]+)
TIME_PREFIX = ^([^,]*,){7}
TIME_FORMAT = %Y-%m-%d-%H:%M:%S.%3Q
TRUNCATE = 10000
MAX_TIMESTAMP_LOOKAHEAD=50
SEGMENTATION = my-custom-segmenter

SPLUNK_HOME/etc/apps/myapp/local/segmenters.conf

[my-custom-segmenter]
MINOR = / : = @ . - $ # % \\ _ ~ %7E

Added those and bounced my test instance, but I still cannot search for index=myindex sourcetype=the_above_sourcetype LG -- the bare term does not return results such as the event below, although *LG* as a term does return it:
456,DEF2,,456~LG~Login~v1.0.0,10.0.0.1,27bee310-a843-11ed-a629-db0c7ca6c807,,2023-02-10-11:59:44.666,300,1,"FAIL"
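Two things worth checking, hedged since I can't see the environment: segmenters.conf is read by the indexing tier, and a change only affects buckets written after the restart, so events indexed before the change will never match the bare term. On versions that support it, walklex can confirm whether the '~'-delimited terms actually landed in the tsidx files of newly written buckets (output field names may vary by version):

```
| walklex index=myindex type=term prefix=lg
| stats sum(count) by term
```

If freshly indexed test events still don't produce lg as a standalone term, the props/segmenters pair may not be reaching the indexers at all (e.g., deployed only to a search head).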
We want to use ITSI with universal forwarders (Windows and *nix). Which is best practice: enabling the metrics inputs in the UF's own local directory, or in the Windows/*nix add-on's local directory?