All Topics


Hi Splunkers! Can anyone help me with a search I am trying to create? I want to extract some data from multiple JSON field values. I am able to ingest the JSON data and extract the fields and values correctly. Sample data below:

{ "key1": "value1", "key2": "[field1 = field_value1] [field2 = field_value2] [field3 = field_value3] [field4 = field_value4]", "key3": "value3", "key4": "[field5 = field_value5] [field6 = field_value6] " }

I am trying to build a search that extracts all of the fields and their respective field_values from the sample data above. Thanks in advance!
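A hedged, untested sketch of one way to pull the bracketed pairs out at search time with rex and max_match (the field names fname and fvalue are illustrative, not from the original post):

```
| rex field=_raw max_match=0 "\[(?<fname>\w+)\s*=\s*(?<fvalue>[^\]]+?)\s*\]"
| eval pairs=mvzip(fname, fvalue, "=")
```

This yields two parallel multivalue fields plus a combined pairs field; if each fieldN needs to become its own Splunk field, a second pass over the pair strings (for example with the extract command) would be the next step.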
Hi guys, I am wondering if there is a way that fields can be linked (sort of like aliases in PowerShell, if that helps for context). For example, the default host field and a field named "computer_fqdn" hold the same value. However, if you search for "host=examplename" but the event you're looking for only uses the "computer_fqdn" field, the event won't show up. I'm happy to answer any questions about this. Thanks in advance, Jamie
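If the intent is for searches on host to also match events that only carry computer_fqdn, a search-time field alias is the usual mechanism. A minimal, untested props.conf sketch (the sourcetype name is a placeholder):

```
[your:sourcetype]
FIELDALIAS-fqdn_as_host = computer_fqdn AS host
```

Be aware that aliasing onto the default host field changes what host means at search time for that sourcetype, so it is worth trying on a test instance first; aliasing in the other direction (host AS computer_fqdn) is the lower-risk variant.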
Hello network, I hope this message finds you all well. I have a challenge I would like to solve, and I am sure it can be done with your help. Problem: we want to index only the highlighted data/KV pairs from the event sample and drop all the rest (please let us know if you need the original regex and the obfuscated event sample). The regex correctly selects what I would like to keep before the data gets indexed on the indexers (I learned this can be done on the indexers, not just on the heavy forwarders, which we would like to avoid in our forwarding topology). Unfortunately, we cannot do this via the Ingest Actions ruleset UI on the cluster manager; it is designed to drop an entire event matching a regex, not particular parts of that event. Question: is there a way we can use props and transforms, or any other mechanism, to drop the unwanted parts of the event during the parsing phase, before the data gets indexed? Thank you!
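One mechanism that fits the parsing phase is SEDCMD in props.conf, which rewrites _raw before it is written to disk and can run on the indexers (or on the first full Splunk instance in the forwarding path). A hedged sketch with a placeholder pattern, since the real regex was not included in the post:

```
# props.conf on the indexers (deployable via the cluster manager)
[your:sourcetype]
SEDCMD-keep_wanted = s/^.*?(\[keep_this=[^\]]+\]).*$/\1/
```

The other common option is a transforms.conf stanza with SOURCE_KEY and DEST_KEY both set to _raw and FORMAT set to the capture groups, referenced from props.conf via TRANSFORMS-; either way, only what the capture groups match survives to the index.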
Hello all, since we can set "srchDiskQuota" for each role in authorize.conf, I would like to know if there is a way to find out how much of the allotted disk space has actually been used by a specific role or user. I want to make sure we do not hit the srchDiskQuota limit soon. So far I couldn't find anything like this in the Monitoring Console. Any feedback would be really appreciated.
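One way to get at least an approximation is the search jobs REST endpoint, which reports a diskUsage value (in bytes, as far as I know) per job; summing it by owner shows current dispatch usage per user. An untested sketch:

```
| rest /services/search/jobs
| stats sum(diskUsage) as bytes_used by author
| eval mb_used=round(bytes_used/1024/1024, 2)
| fields author mb_used
| sort - mb_used
```

This only covers jobs still present in the dispatch directory; searching index=_internal for the quota warning messages Splunk logs when a user approaches or exceeds the quota may also help.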
Hello, I would like to know if there is a schematic architecture diagram that explains how Splunk SOAR works (showing the components, the relationships between them, how they interact, etc.). Below is an example of such a diagram for Splunk Enterprise.
Events:
Message = "This system has RPC error"
Message = "This system has login failure error"
Message = "This system has DCOM error"
Message = "This system has RPC error"
Message = "This system has login failure error"
Message = "This system has DCOM error"
Message = "This system has RPC error"
Message = "This system has login failure error"
Message = "This system has DCOM error"
Message = "This system has RPC remote error"
Message = "This system has login failure error"
Message = "This system has DCOM error"
Message = "This system has RPC error"
Message = "This system has login failure error"
Message = "This system has DCOM issue"
Message = "This system has RPC error"
Message = "This system has login failure error"
Message = "This system has DCOM error"
Message = "This system has login failure error"
Message = "This system has DCOM error"
Message = "This system has no error"
Message = "This system has fatal error"
Message = "This system has no fatal error"
Message = "This system has no CPU error"
Message = "This system has memory issue"
How do I search the Message field in the events above to count DCOM, RPC, and login occurrences? For example, from the data above I should get: DCOM = 7, RPC = 6, login = 7, total message count = 25. Thanks for your time!
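A hedged sketch of one way to do this: pull the keyword into its own field with rex, then count. Events whose Message contains none of the three keywords get no err_type and simply fall out of the by clause:

```
| rex field=Message "(?<err_type>DCOM|RPC|login)"
| stats count by err_type
```

The overall total of 25 can be had with a plain | stats count in a separate search, or carried alongside in one search with an | eventstats count as total before the rex.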
Hi, we are getting the following error message. I think I have a few options, but I am not sure which is best. I have read this but am still not sure what to do: https://docs.splunk.com/Documentation/Splunk/8.0.0/Indexer/Anomalousbuckets What are the pros and cons of each option? Or should I run a data rebalance? It only affects one index, and in this case it's a small index, so it should finish quickly... @pravin
I've followed all the steps from the CLI as described in the Splunk documentation, and after that I upgraded ITSI from the GUI, from version 4.11.4 to version 4.13.0. But I encountered this error: "One or more problems occurred that prevented the upgrade process from completing." Looking into the logs, I found:

process:41611 thread:MainThread ERROR [itsi.migration] [__init__:1413] [exception] 4.11.0 to 4.12.0: [Errno 13] Permission denied PermissionError: [Errno 13] Permission denied: '/opt/splunk/etc/apps/<app_name>/lib/backup_4_11_0_2023-05-22T09_04_06_476132+00_00'
process:41611 thread:MainThread ERROR [itsi.migration] [itsi_migration:4430] [run_migration] Error occurred while disabling/enabling the [$SPLUNK_HOME/etc/apps/<app_name>/bin/import_icons_app_name.py] scripted input: [HTTP 500] Splunkd internal error; [{'type': 'ERROR', 'code': None, 'text': "\n In handler 'script': Could not flush changes to disk: /nobody/system: ConfObjectManager: /opt/splunk/etc/apps/<app_name>/metadata/local.meta"}]
Hi all, if someone has set up MS365 and MS Teams on Splunk before, could you please guide me on what details are required to open a firewall? As per the add-ons, we must first register the app in Microsoft Azure AD. So is a firewall rule needed from MS Azure AD to the Splunk HF? https://splunkbase.splunk.com/app/4055 https://splunkbase.splunk.com/app/4994 @tomapatan @jasonabbott @AntoineDRN
We are currently required to upgrade our Splunk environment from version 8.2.4 to version 9.x, and we are concerned about any differences in SPL function behavior, as we ran into an issue with "streamstats" during our previous upgrade to version 8.2.4. Below are the components of our environment:
1- Three heavy forwarders
2- Indexer clustering: one master machine and three indexers
3- Search head clustering: one SHC with three search members
4- A standalone search machine
5- Two Splunk DB Connect machines
6- A deployment server used to manage universal forwarders installed on external nodes to collect real-time data.
So if anyone has encountered any major issues following this upgrade, please share them with me so they can be taken into account; it would be greatly appreciated.
How can I convert a single value panel in a dashboard to a rangemap icon with thresholds, like the following image:
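The classic approach is the rangemap command, which adds a range field (low/elevated/severe by default) that the single value visualization can render as a colored icon; the thresholds below are placeholders:

```
... | stats count
| rangemap field=count low=0-49 elevated=50-99 default=severe
```

In Simple XML the single value panel is then driven by the range field (the exact option name varies by Splunk version); in Dashboard Studio the same effect is configured through the panel's dynamic color thresholds instead.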
Hello Splunkers, I want to customize the color of the bars based on a condition, as in the example, where ID, DateChanged, PlannedDate_1, and PlannedDate_2 are columns and the date format is dd.mm.yyyy HH:mm. There are duplicate IDs, so I want to take the latest dates per ID.
Green --> all IDs with DateChanged = no
Red --> all IDs with DateChanged = yes AND a gap of less than 14 days between PlannedDate_1 and PlannedDate_2
Yellow --> all IDs with DateChanged = yes AND a gap of 14-20 days between PlannedDate_1 and PlannedDate_2
Green --> all IDs with DateChanged = yes AND a gap of more than 20 days between PlannedDate_1 and PlannedDate_2
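Computing the gap and a color bucket could look like this untested sketch (field names taken from the post; the strptime format assumes dd.mm.yyyy HH:mm):

```
| eval p1=strptime(PlannedDate_1, "%d.%m.%Y %H:%M"),
       p2=strptime(PlannedDate_2, "%d.%m.%Y %H:%M")
| eval gap_days=abs(p2 - p1) / 86400
| eval color=case(DateChanged=="no", "green",
                  gap_days<14,       "red",
                  gap_days<=20,      "yellow",
                  true(),            "green")
```

Keeping only the latest row per ID could be done first with a sort on whichever field marks recency followed by | dedup ID; the color field itself then has to be wired to the chart, via charting.fieldColors in Simple XML or a color mapping in Dashboard Studio.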
index=mail [ | inputlookup 123.csv | rename address AS query | fields query ] | dedup MessageTraceId | lookup email_domain_whitelist domain AS RecipientDomain output domain as domain_match | where isnull(domain_match) | lookup all_email_provider_domains domain AS RecipientDomain output domain as domain_match2 | where isnotnull(domain_match2) | stats values(RecipientAddress) as Recipient values(Subject) as Subject earliest(_time) AS "Earliest" latest(_time) AS "Latest" values(Status) as Status by RecipientDomain SenderAddress | eval subject_count=mvcount(Subject) | sort - subject_count | convert ctime("Latest") | convert ctime("Earliest")

Hi, I have another column called date in 123.csv. After running the query, for the results that match the CSV, I would like to show the date from 123.csv in a column as well. Please help.
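The subsearch only filters events, so the date column from the CSV never reaches the results; an explicit lookup against 123.csv can bring it in. An untested sketch, assuming address in the CSV corresponds to the RecipientAddress field (and that 123.csv is usable as a lookup, which may require creating a lookup definition for the file):

```
| lookup 123.csv address AS RecipientAddress OUTPUT date AS csv_date
```

Placed before the stats, with values(csv_date) as Date added to the stats clause, the matching date would then appear as its own column.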
Hi Community, I have installed the OpenTelemetry agent on Windows 2012 R2 and am trying to monitor SQL Server 2012. I configured agent.yaml as below and restarted the service, but we can't see data in the infrastructure section. Please suggest what the issue could be!

smartagent/sqlserver:
  type: telegraf/sqlserver
  host: xxxxxxxxx
  port: 1433
  userID: xxxxxx
  password: xxxxxxxx

Regards, Eshwar
I uploaded a private app to Splunk Cloud; the app validation and installation went well, but the app doesn't work, failing with a "command not found" error. Below is my simple app setup, with a simple command configuration and the Python scripts in [app directory]/bin.

* Splunk Cloud uploaded apps (successfully installed)

* commands.conf

[kk]
filename = test_kaggle.py
python.version = python3
chunked = true

[kkh]
filename = test_sphec.py
python.version = python3
chunked = true

[kks]
filename = test_spsyn.py
python.version = python3
chunked = true

It was confirmed to work normally in Splunk Enterprise, so I compared Splunk Enterprise and Splunk Cloud. When I view objects in Splunk app management, Splunk Enterprise and Splunk Cloud appear differently, as shown below.

* My app in Splunk Enterprise

* My app in Splunk Cloud

To check further, I downloaded the Config Explorer app from Splunkbase, installed it on Splunk Enterprise and Splunk Cloud respectively, and compared them. It installed and ran fine on Splunk Enterprise. On Splunk Cloud the installation went well, but the object view looks different and the app didn't work.

* Config Explorer in Splunk Enterprise

* Config Explorer in Splunk Cloud (there are no configurations)

Config Explorer also works in Splunk Enterprise as shown in the image below, but does not work in Splunk Cloud. Is there a separate setting needed when installing on Splunk Cloud?
We had a vendor set up our Splunk instance and configure a "Brute Force Attack" alert with the following query.

---- original brute force alert ----
| tstats summariesonly=t allow_old_summaries=t count from datamodel=Authentication by Authentication.action, Authentication.src
| rename Authentication.src as source, Authentication.action as action
| chart last(count) over source by action
| where success>0 and failure>20
| sort -failure
| rename failure as failures
| fields - success, unknown

This seemed to be working OK, but lately we've been getting a lot of emails from it. Most of them I've fixed; one was a bad password in an automated job. But for the last one on my list, the source is listed as "unknown", and I can't seem to find any more information about it. I'm new to Splunk, so I'm probably not looking in the correct place in the correct way. Does anyone have suggestions on how to track down what it might be?
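To chase down the unknown source, one approach is to drop from the accelerated summaries to the underlying events and inspect where they came from; an untested sketch:

```
| from datamodel:"Authentication.Authentication"
| search action=failure src=unknown
| stats count by index, sourcetype, source, user
```

In the Authentication data model, src=unknown usually means the raw events never had a usable source field mapped, so the index/sourcetype combinations this returns typically point at the data feed whose field extractions need attention.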
I want to convert some of the individual JSON fields in the event below into a single nested JSON object, as in the second example.

Current format:
{ "ID": 1, "Timestamp": "2023-05-18T05:07:59.940594300Z", "FileVersion": "10.0.17134.1 (WinBuild.160101.0800)", "Company": "Microsoft Corporation", "TerminalSessionId": 0, "UtcTime": "2018-08-20 15:18:59.929", "Product": "Microsoft® Windows® Operating System", }

Expected format:
{ "ID": 1, "Timestamp": "2023-05-18T05:07:59.940594300Z", "EventData":{ "FileVersion": "10.0.17134.1 (WinBuild.160101.0800)", "Company": "Microsoft Corporation", "TerminalSessionId": 0, "UtcTime": "2018-08-20 15:18:59.929", "Product": "Microsoft® Windows® Operating System", } }

I have tried to play around with the JSON functions but could not figure out how to achieve this. Can someone please help?
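On recent Splunk versions the eval JSON functions can rebuild the event; a hedged, untested sketch assuming the listed fields are already extracted:

```
| eval EventData=json_object("FileVersion", FileVersion,
                             "Company", Company,
                             "TerminalSessionId", TerminalSessionId,
                             "UtcTime", UtcTime,
                             "Product", Product)
| eval _raw=json_object("ID", ID,
                        "Timestamp", Timestamp,
                        "EventData", json(EventData))
```

The json() wrapper is meant to keep EventData as a nested object rather than an escaped string; if your version lacks it, json_set on the outer object is an alternative worth trying.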
Hello, I would like to count the number of values every time the search matches the string alternativeIdValue, i.e. after the "=", count the values separated by commas:

alternativeIdValue=IE778898,WX888889,IS89657578,XS76575889.............................................

The length of each value can differ; the only common pattern is that they are separated by commas and each starts with a letter.
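A hedged sketch: capture everything after the = up to the first whitespace, then count the comma-separated values with split and mvcount (assumes the list is not broken across lines):

```
| rex "alternativeIdValue=(?<id_list>\S+)"
| eval id_count=mvcount(split(id_list, ","))
| stats sum(id_count) as total_ids
```

Dropping the final stats leaves the per-event id_count instead of a grand total.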
The field is not extracted properly for a Windows event log where the IP address is labeled "Client IP". I tried to extract the field with the regexes below, but no luck; the _internal logs show that the regex from props.conf was applied successfully. Please advise if anyone has faced this issue.

Regex 1: (?<ip>\d{1,3}\.\d{1,3}\.\d{1,3}\.\d{1,3})
Regex 2: [^+]Client\s+IP\:\s+(?<ip>\d+.\d+.\d+.\d+)\s+
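One thing worth noting: in Regex 2 the dots in the IP portion are unescaped, so \d+.\d+ can match more than intended. A search-time rex sketch for validating the pattern against the raw events before committing it to props.conf (the index name is a placeholder):

```
index=your_windows_index "Client IP"
| rex "Client\s+IP:\s+(?<ip>\d{1,3}(?:\.\d{1,3}){3})"
| stats count by ip
```

If rex extracts correctly here but the props.conf EXTRACT does not, the usual suspects are the stanza name not matching the events' sourcetype, or the extraction being deployed somewhere other than the search head.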