All Topics

This is really a log4net question, but I'm hoping the folks here can help; I have been unsuccessful at searching online for a solution.

We have a custom application which generates local logs in JSON format via the log4net module. We then have a Splunk UF installed to collect said logs. In general that all works fine. The problem is that some log messages include a nested JSON 'message' field -- but log4net is misformatting it as a string, so Splunk doesn't parse the nested part. You can see the issue below, where log4net is unnecessarily adding quote marks around the nested part:

CURRENT/INVALID
"message":"{"command":"Transform271ToBenefitResponse","ms":1}"

PROPER
"message":{"command":"Transform271ToBenefitResponse","ms":1}

I'm not entirely sure of the log4net configuration, but here's what I was told by one of our developers:

ORIGINAL LOG4NET CONFIG
<conversionPattern value="%utcdate [%property{CorrelationId}] [%property{companyId}] [%property{userId}] [%thread] [%level] %logger - %message%newline" />

UPDATED CONFIG; STILL FAILS
<conversionPattern value="{&quot;date&quot;:&quot;%date{ISO8601}&quot;, &quot;correlationId&quot;:&quot;%property{CorrelationId}&quot;, &quot;companyId&quot;:&quot;%property{companyId}&quot;, &quot;userId&quot;:&quot;%property{userId}&quot;, &quot;thread&quot;:&quot;%thread&quot;, &quot;level&quot;:&quot;%level&quot;, &quot;logger&quot;:&quot;%logger&quot;, &quot;message&quot;:&quot;%message&quot;}%newline" />
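A hedged sketch of one possible fix, assuming every %message rendered by this appender is itself valid JSON: drop the &quot; wrapper around %message in the conversionPattern, so the nested object is emitted inline instead of as a quoted string. If some messages are plain text this will produce invalid JSON for those events, so it only works when the application guarantees JSON messages.

<conversionPattern value="{&quot;date&quot;:&quot;%date{ISO8601}&quot;, &quot;correlationId&quot;:&quot;%property{CorrelationId}&quot;, &quot;companyId&quot;:&quot;%property{companyId}&quot;, &quot;userId&quot;:&quot;%property{userId}&quot;, &quot;thread&quot;:&quot;%thread&quot;, &quot;level&quot;:&quot;%level&quot;, &quot;logger&quot;:&quot;%logger&quot;, &quot;message&quot;:%message}%newline" />

If the log format cannot change, another option is to rewrite the events on the Splunk side (e.g. a SEDCMD in props.conf on the parsing tier) to strip the wrapping quotes before index time, but the log4net-side fix is simpler if JSON messages are guaranteed.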
I have a field called serial_id which has values like ABC2022100845001. I need a count of events whose serial_id contains 45 in the 5th and 6th bytes from the end.
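A hedged sketch (the index name is an assumption; adjust the offset to match where the two bytes actually sit):

index=your_index serial_id=*
| eval tail_pair=substr(serial_id, len(serial_id)-5, 2)
| where tail_pair="45"
| stats count

substr(serial_id, len(serial_id)-5, 2) picks the 6th and 5th characters from the end; in the sample value that pair is "84" and the "45" sits one position later, so len(serial_id)-4 may be what is actually wanted.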
I have 2 major questions:
1) I have 2 sourcetypes, A and B, with 2 important fields, Category and Environment. I want to compare all of the Category and Environment values from sourcetype A to sourcetype B and return the results that are common to both sourcetypes.
2) Same setup: I want to compare all of the Category and Environment values from sourcetype A to sourcetype B and return the results that do not show up in both sourcetypes.
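A hedged sketch of one approach (the index name is an assumption): search both sourcetypes at once and count how many sourcetypes each Category/Environment pair appears in.

index=your_index sourcetype IN (A, B)
| stats dc(sourcetype) as st_count values(sourcetype) as sourcetypes by Category Environment
| where st_count=2

For question 2, change the final line to | where st_count=1 to keep the pairs that appear in only one of the two sourcetypes.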
I am pulling data from multiple locations and a new field, threshold, has been introduced. The issue is that threshold is common but has different values depending on whether it belongs to cpuPerc or memoryCons etc. There are 4 of them.

| mstats min("mx.process.cpu.utilization") as cpuPerc min("mx.process.threads") as nbOfThreads min("mx.process.memory.usage") as memoryCons min("mx.process.file_descriptors") as nbOfOpenFiles min("mx.process.up.time") as upTime avg("mx.process.creation.time") as creationTime WHERE "index"="metrics_test" AND mx.env=http://mx20267vm:15000 span=10s BY pid cmd service.type host.name service.name replica.name component.name threshold

One option I have, if I want to display all the thresholds differently, is to write 3 joins - but this is heavy on CPU:

| mstats min("mx.process.cpu.utilization") as cpuPerc WHERE "index"="metrics_test" AND mx.env=http://mx20267vm:15000 span=10s BY pid cmd service.type host.name service.name replica.name component.name threshold | rename service.name as service_name | rename threshold as T_cpuPerc | join _time service.name replica.name [| mstats min("mx.process.threads") as nbOfThreads WHERE "index"="metrics_test" AND mx.env=http://mx20267vm:15000 span=10s BY pid cmd service.type host.name service.name replica.name component.name threshold | rename service.name as service_name | rename threshold as T_nbOfThreads ] | join _time service.name replica.name [| mstats min("mx.process.memory.usage") as memoryCons WHERE "index"="metrics_test" AND mx.env=http://mx20267vm:15000 span=10s BY pid cmd service.type host.name service.name replica.name component.name threshold | rename service.name as service_name | rename threshold as T_memoryCons ] | join _time service.name replica.name [| mstats min("mx.process.file_descriptors") as nbOfOpenFiles WHERE "index"="metrics_test" AND mx.env=http://mx20267vm:15000 span=10s BY pid cmd service.type host.name service.name replica.name component.name threshold | rename service.name as service_name | rename threshold as T_nbOfOpenFiles ]

I am trying it this way, but I am not sure how to merge the rows by time at the end - any ideas?

| mstats min("mx.process.cpu.utilization") as cpuPerc min("mx.process.threads") as nbOfThreads min("mx.process.memory.usage") as memoryCons min("mx.process.file_descriptors") as nbOfOpenFiles min("mx.process.up.time") as upTime avg("mx.process.creation.time") as creationTime WHERE "index"="metrics_test" AND mx.env=http://mx20267vm:15000 span=10s BY pid cmd service.type host.name service.name replica.name component.name threshold | rename service.name as service_name | search service_name IN (*cache*) | eval threshold_nbOfThreads=if(isnull(nbOfThreads),"",$threshold$) | eval threshold_memoryCons=if(isnull(memoryCons),"",$threshold$) | eval threshold_nbOfOpenFiles=if(isnull(nbOfOpenFiles),"",$threshold$) | table _time threshold threshold_nbOfOpenFiles threshold_memoryCons threshold_nbOfThreads

The issue is that the data is now on different rows, where I need them to be on the same row by _time. So in the image below, we have 3 lines for each time. How can I get them to merge per timestamp?
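A hedged sketch for collapsing those rows onto one row per timestamp, building on the last query above (the split-by fields in the stats are assumptions; also note that $threshold$ looks like dashboard token syntax, and the field itself would just be threshold):

... existing mstats, rename and eval lines ...
| stats values(cpuPerc) as cpuPerc values(nbOfThreads) as nbOfThreads values(memoryCons) as memoryCons values(nbOfOpenFiles) as nbOfOpenFiles values(threshold_nbOfThreads) as threshold_nbOfThreads values(threshold_memoryCons) as threshold_memoryCons values(threshold_nbOfOpenFiles) as threshold_nbOfOpenFiles by _time service_name

Since each metric and its threshold are only populated on one of the three rows, values() (or max()) picks up the non-null value and the output becomes a single row per _time and service, with no join needed.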
I am new to Splunk Cloud and I would like to install an Enterprise Security app (see the screenshot below) on my Splunk instance. After opening the app it should look like the second screenshot below, and finally I should be able to see the last screen below. Can anyone please help me with this? If you have any doubts about my question, please let me know. Thanks in advance.
Hi All, I am onboarding data from a heavy forwarder using a Splunk TA. Is it possible to 1) index all logs into one index and route them to the group A indexers, and 2) index a subset of the logs into another index and route them to the group B indexers? Thanks.
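A hedged sketch of the usual pattern on the heavy forwarder (the sourcetype, regex, index and group names are all assumptions):

# props.conf
[your_sourcetype]
TRANSFORMS-routing = route_subset_index, route_subset_group_b

# transforms.conf
[route_subset_index]
REGEX = pattern_identifying_the_subset
DEST_KEY = _MetaData:Index
FORMAT = other_index

[route_subset_group_b]
REGEX = pattern_identifying_the_subset
DEST_KEY = _TCP_ROUTING
FORMAT = group_b_indexers

# outputs.conf
[tcpout:group_a_indexers]
server = idxa1:9997, idxa2:9997

[tcpout:group_b_indexers]
server = idxb1:9997, idxb2:9997

One caveat: _TCP_ROUTING replaces the destination group for matching events, so if the subset must also remain in the first index/group, the events would need to be cloned (e.g. via CLONE_SOURCETYPE) or the FORMAT set to both groups, depending on the exact requirement.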
Hi, here is the log:

23:50:26.698 app module1: CHKIN: Total:[100000] from table Total:[C000003123456] from PC1
23:33:39.389 app module2: CHKOUT: Total:[10] from table Total:[C000000000000] from PC2
23:50:26.698 app module1: CHKIN: Total:[100000] from table Total:[C000000000030] from PC1
23:33:39.389 app module2: CHKOUT: Total:[10] from table Total:[C000000000000] from PC2

I need to sum the values in brackets. Expected output:

items     total1    total2     from
CHKIN     200000    3123486    PC1
CHKOUT    20        0          PC2

Thanks
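A hedged sketch (index/sourcetype are assumptions; the rex assumes the second bracket always starts with C followed by digits):

index=your_index ("CHKIN" OR "CHKOUT")
| rex "(?<items>CHKIN|CHKOUT): Total:\[(?<total1>\d+)\] from table Total:\[C(?<total2>\d+)\] from (?<from>\S+)"
| stats sum(total1) as total1 sum(total2) as total2 values(from) as from by items

On the sample above this gives CHKIN 200000 / 3123486 / PC1 and CHKOUT 20 / 0 / PC2, since sum() treats the captured digits as numbers and drops the leading zeros.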
Hi, I have an index sensitive_data that contains sensitive data. I want to ensure that ONLY one particular user with the roles power and user has access to this index, and that other users with the same roles do not have access to this particular index. How do I do this reliably? What I have done is create an LDAP group, map a role to that group, and allow the index access for that particular role. Could someone please confirm whether the approach is correct?

[role_power]
cumulativeRTSrchJobsQuota = 10
cumulativeSrchJobsQuota = 200
list_storage_passwords = enabled
schedule_search = disabled
srchDiskQuota = 1000
srchMaxTime = 8640000
rtsearch = disabled
srchIndexesAllowed = *
srchIndexesDisallowed = sensitive_data

[role_user]
schedule_search = enabled
srchMaxTime = 8640000
srchDiskQuota = 500
srchJobsQuota = 8
srchIndexesAllowed = *
srchIndexesDisallowed = sensitive_data

[role_sensitive-data-power]
cumulativeRTSrchJobsQuota = 0
cumulativeSrchJobsQuota = 0
importRoles = power
srchIndexesAllowed = sensitive_data
srchMaxTime = 8640000

[role_sensitive-data-user]
cumulativeRTSrchJobsQuota = 0
cumulativeSrchJobsQuota = 0
importRoles = user
srchIndexesAllowed = sensitive_data
srchMaxTime = 8640000

Thanks
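A hedged observation on this config: index access is additive across a user's roles, so as long as the base power/user roles keep srchIndexesAllowed = *, every member of those roles can search all non-internal indexes, including sensitive_data, and I'm not aware of srchIndexesDisallowed being a documented authorize.conf setting to rely on. A more reliable pattern is to keep the wildcard (or at least the sensitive index) out of the base roles and grant sensitive_data only through the dedicated role, roughly like this (role names follow the post; the non-sensitive index list is an assumption):

[role_power]
srchIndexesAllowed = main;other_nonsensitive_index

[role_user]
srchIndexesAllowed = main;other_nonsensitive_index

[role_sensitive-data-power]
importRoles = power
srchIndexesAllowed = sensitive_data

Then assign role_sensitive-data-power (via the LDAP group mapping) only to the one user who should see the index.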
How can I group the start and end time of a station like the attachment shows? I want to skip the start times marked with X.
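Without the attachment this is a guess, but a common pattern looks like this (the index, field names and the X marker are assumptions):

index=your_index station=* NOT starttime_flag="X"
| stats min(_time) as start_time max(_time) as end_time by station
| fieldformat start_time=strftime(start_time, "%Y-%m-%d %H:%M:%S")
| fieldformat end_time=strftime(end_time, "%Y-%m-%d %H:%M:%S")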
Hi Splunkers, hopefully I am posting in the correct place; apologies if not! I have the following code/SPL from inside an XML form. It looks inside a lookup and then gives information about a specific field (field name taken from the variable "FieldName") which matches the value of SearchString (value taken from the variable "SearchString").

| inputlookup $lookup_name$ | search $FieldName$=$SearchString$

Those of you who are experienced will see that it doesn't work this way. I am assuming that to make this XML code work and give me the search result I expect, I need to expand the variables? If so, any idea how to do that? Regards, vagnet
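A hedged sketch, assuming all three values come from form inputs whose tokens are lookup_name, FieldName and SearchString: tokens are substituted as literal text before the search runs, so the usual fix for an exact match is simply to quote the value:

| inputlookup $lookup_name$ | search $FieldName$="$SearchString$"

If that still fails, the job inspector shows what the expanded search string actually looked like; unset tokens or stray quotes in the input defaults are the usual culprits.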
Hey All, I get "no results found" for a tag that looks for fields created by a rex. So, with

sourcetype=DataServices | rex "JOB: Job (?<BIMEJob><(?<=<).*(?=>)>)"

I get the BIMEJob field populated with results. Now I want to bunch some field values together, so I create a tag containing the field values I care about, add it to my search, and get no results found:

sourcetype=DataServices tag=GB1_BIME | rex "JOB: Job (?<BIMEJob><(?<=<).*(?=>)>)"

Greatly appreciated if someone could help.
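A hedged explanation: tag=... is resolved when the base search runs, before the inline | rex creates BIMEJob, so the tag never has a field value to match. The usual pattern is to move the extraction into props.conf so BIMEJob exists as a search-time field, then tag its values (file locations and the example value below are assumptions):

# props.conf
[DataServices]
EXTRACT-bimejob = JOB: Job (?<BIMEJob><.*>)

# tags.conf
[BIMEJob=<SomeJobNameYouCareAbout>]
GB1_BIME = enabled

With that in place, sourcetype=DataServices tag=GB1_BIME should return the tagged events without needing the inline rex.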
Hi, I am using Splunk DB Connect 2.1.4 to get data from a table in an Oracle database (a table with around 1000 transactions per second, synced in near real time) into a Splunk server running as a heavy forwarder, and it works OK. My query in DB Connect 2.1.4 uses SEQ_NO as the rising column. For some reasons I need to upgrade to DB Connect 3.6 (latest), so I created a new heavy forwarder and installed DB Connect 3.6. It fails: some data is duplicated and some is missed. My query in DB Connect 3.6 also uses SEQ_NO as the rising column. Please let me know how I can solve the issue. Thanks!
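A hedged sketch of the query shape that DB Connect 3.x rising-column inputs expect (the table name is an assumption): the checkpoint placeholder has to appear in the WHERE clause and the results must be ordered by the rising column, otherwise batches can overlap or skip rows, which shows up as exactly this mix of duplicates and gaps.

SELECT *
FROM YOUR_TABLE
WHERE SEQ_NO > ?
ORDER BY SEQ_NO ASC

It is also worth confirming that SEQ_NO is strictly increasing and committed in order; with ~1000 transactions per second, rows that commit late with values below the saved checkpoint will be missed regardless of the query.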
Hi, I'm trying to configure TA-symantec_atp v1.5.0 on Splunk 8.1.6, but nothing happens when I try to save the config in the UI page. I found the errors below in /opt/splunk/var/log/splunk/python.log:

2021-09-22 13:18:41,150 +0200 ERROR __init__:164 - The REST handler module "email_symantec_util" could not be found. Python files must be in $SPLUNK_HOME/etc/apps/$MY_APP/bin/
2021-09-22 13:18:41,150 ERROR The REST handler module "email_symantec_util" could not be found. Python files must be in $SPLUNK_HOME/etc/apps/$MY_APP/bin/
2021-09-22 13:18:41,151 +0200 ERROR __init__:165 - No module named 'rapid_diag'
Traceback (most recent call last):
  File "/opt/splunk/lib/python3.7/site-packages/splunk/rest/__init__.py", line 161, in dispatch
    module = __import__('splunk.rest.external.%s' % parts[0], None, None, parts[0])
  File "/opt/splunk/etc/apps/TA-symantec_atp/bin/email_symantec_util.py", line 6, in <module>
    from . import logger_manager
  File "/opt/splunk/etc/apps/splunk_rapid_diag/bin/logger_manager.py", line 14, in <module>
    from rapid_diag.util import get_splunkhome_path, get_app_conf
ModuleNotFoundError: No module named 'rapid_diag'

And in /opt/splunk/var/log/splunk/web_service.log (the same error repeats every couple of seconds):

2021-09-22 13:24:03,700 ERROR [614b1253af7ff740791c10] utility:58 - name=javascript, class=Splunk.Error, lineNumber=272, message=Uncaught TypeError: Cannot read properties of undefined (reading 'data'), fileName=https://localhost:8443/en-US/static/@071D8440E5D1A785ECFF180D1ECF4589ACA117B332BB46A44AF934EFD3BCE244.13:1/app/TA-symantec_atp/application.js

Background: I'm currently using TA-symantec_atp v1.3.0 with Splunk 7.3.2, but I want to upgrade to Splunk 8.1.x, and only TA-symantec_atp v1.5.0 is compatible with 8.1.x and above (Python 3). I've tried to install and configure v1.5.0 of the add-on on several machines running Splunk 8.1.x, but all attempts resulted in the same error described above. Has anybody got this TA working?
The default page needs to be changed: after logging in to Splunk I should be directed to the triggered alerts page. E.g., below are the triggered alerts. After login I should be able to see the alerts that have triggered, with time, severity and other common information, and after analyzing them I should be able to close the triggered alert. Can anyone please guide me on this?
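A hedged sketch of the two pieces usually involved (the view name below is a placeholder, not a verified page name; check the URL of the triggered-alerts page you want): the landing app per user can be set in user-prefs.conf, and the default view within that app is controlled by the app's navigation XML.

# etc/users/<username>/user-prefs/local/user-prefs.conf (or via the user's account settings in the UI)
[general]
default_namespace = search

# etc/apps/search/local/data/ui/nav/default.xml - mark a view as the app's landing page
<nav>
  <view name="your_triggered_alerts_view" default="true" />
</nav>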
I'm trying to plan a (for me) large deployment, connecting several sites to a headquarters. Each site has from 200 to 900 clients which need to send data to the indexers. At two locations the bandwidth is relatively poor, so the idea is to set up a cluster of two indexers locally at those locations. At the headquarters we'd set up a cluster of three indexers for all the other locations which don't have a cluster, and of course for the local data that pours in.

Locations: within each location I thought of forwarders on the servers (obviously) and then a heavy/intermediate forwarder which collects all those logs and sends them to the indexer cluster in the headquarters. At locations that need their own cluster, we'd set up a two-server cluster.

Headquarters: there we set up a three-server cluster and a master node, two search heads (one for the technical part and one for Enterprise Security), and a deployment server to push apps to all the connected clients in all locations. The search heads should send their requests not only to the headquarters cluster but to all locations; I've read that it is possible to have a search head search across multiple clusters. The clusters don't talk with each other, so there is no data exchange between the headquarters cluster and the two other clusters.

For better understanding, since this is complex to explain, I've generated a draw.io architecture view. My questions on the feasibility of this architecture are as follows:

- Do we need a master node for each cluster, or can one master node manage multiple clusters? Provided that the locations with their own cluster don't sync with the headquarters and only process requests from the SHs.
- Is there a minimum bandwidth that should be provided? The Splunk docs say 1 Gbit/s, but I think for this kind of deployment 10-50 Mbit/s is enough?
- Is it basically better to just build a larger central cluster with more servers in the headquarters, or which performs better? My fear is to set all of this up and then fall on my face because I didn't scale the servers correctly.
- Is one deployment server for all locations and all servers feasible, or should I set up a deployment server at least for the locations with their own cluster too?

All specs should be in the draw.io view. I hope you can answer my questions. I know it's a lot, but I'm afraid I may scale it wrong and then, months into the setup, notice what is wrong. Thanks a lot for your tips and suggestions!

Question: at locations with their own cluster, do they need their own master node?
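On the "search across multiple clusters" point, a hedged sketch of the usual mechanism: each indexer cluster needs its own cluster manager (a single manager node does not run several independent clusters), but one search head can attach to several clusters at once via server.conf, roughly like this (hostnames, stanza labels and keys are assumptions; newer releases spell the setting manager_uri rather than master_uri):

[clustering]
mode = searchhead
master_uri = clustermaster:hq, clustermaster:site_a, clustermaster:site_b

[clustermaster:hq]
master_uri = https://hq-cm.example.local:8089
pass4SymmKey = hq_key

[clustermaster:site_a]
master_uri = https://sitea-cm.example.local:8089
pass4SymmKey = site_a_key

[clustermaster:site_b]
master_uri = https://siteb-cm.example.local:8089
pass4SymmKey = site_b_key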
Hello Splunkers!! I am getting the error below on one of my HFs. How can I resolve or work around this issue? Please provide expert suggestions on this.

Tcpout Processor: The TCP output processor has paused the data flow. Forwarding to output group itops_mulesoft_prod_hf has been blocked for 100 seconds. This will probably stall the data flow towards indexing and other network outputs. Review the receiving system's health in the Splunk Monitoring Console. It is probably not accepting data.

Thanks in advance.
State  Date      Desc  Count
bc     11102021  vm    234569
bc     12102021  vm    456328
bc     11102021  vm    234569
bc     12102021  vm    4532178
cd     11102021  vm    234000
cd     12102021  vm    234000
cd     11102021  vm    234000
cd     12102021  vm    568902

From the stats output (such as the above), I would like to first group the rows by State, then compare count[0] with count[1], count[2] with count[3], and count[3] with count[0]; if any of them match, those rows should be displayed as the result. In the above case, for state=cd, index[0] and index[1] are the same, so the expected result is:

State  Date      Desc  Count
cd     11102021  vm    234000
cd     12102021  vm    234000

Please assist.
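A hedged sketch that covers the example (it flags any State where the same Count value appears on more than one row, which is slightly broader than only comparing positions 0/1, 2/3 and 3/0):

... your existing stats output ...
| eventstats count as matches by State Count
| where matches > 1
| dedup State Date Desc Count
| fields State Date Desc Count

If the positional comparison really matters, | streamstats count as pos by State can number the rows within each State so the specific pairs can be compared with eval/where, but the duplicate-value check above already produces the expected cd rows.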
Hi, is it possible, when using Global Account, to customise the fields, i.e. add fields other than just Username and Password? Ideally, I would like to be able to specify an Account Name, API Key and Authorization header on the Account tab, so that when configuring each input I can choose the Account Name from a drop-down and have it pull the API Key and Authorization Header fields. I can specify these as "Additional Parameters" for the add-on, but then I have no way of selecting them on a per-input basis when configuring each input.
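Assuming this is an Add-on Builder / UCC-generated add-on, a hedged sketch: the Add-on Builder UI itself only offers Username/Password for Global Account, but the generated package is driven by a globalConfig.json, where extra account-level fields can be declared as additional entities on the account tab (the field names below are assumptions, and hand edits may be lost if the add-on is later rebuilt in Add-on Builder):

{
  "pages": {
    "configuration": {
      "tabs": [
        {
          "name": "account",
          "title": "Accounts",
          "entity": [
            { "type": "text", "field": "name", "label": "Account Name", "required": true },
            { "type": "text", "field": "api_key", "label": "API Key", "encrypted": true, "required": true },
            { "type": "text", "field": "auth_header", "label": "Authorization Header", "required": false }
          ]
        }
      ]
    }
  }
}

Each input can then reference the account by name and read api_key/auth_header from the account configuration in the input's Python code.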
Hello everyone, I have a question about how to write a subquery in Splunk. For example, I would like to get a list of productIds that were returned but were not purchased again later; the part I'm stuck on is the NOT IN subquery. How can I accomplish this?

index=main sourcetype=access_combined_wcookie action=returned NOT IN [search index=main sourcetype=access_combined_wcookie action=purchase |table clientip] | stats count, dc(productId) by clientip

Thank you in advance
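A hedged sketch: in SPL a subsearch used as a filter returns field=value pairs that are ANDed into the outer search, so the NOT-IN pattern is written as NOT [ subsearch ] with the subsearch emitting only the field to exclude. For "returned but never purchased afterwards", comparing timestamps per productId avoids subsearch size limits entirely:

index=main sourcetype=access_combined_wcookie action IN (returned, purchase)
| stats max(eval(if(action="returned",_time,null()))) as last_return max(eval(if(action="purchase",_time,null()))) as last_purchase by productId
| where isnotnull(last_return) AND (isnull(last_purchase) OR last_purchase < last_return)

A subsearch version is also possible (... action=returned NOT [search ... action=purchase | fields productId]), but it excludes any product that was ever purchased, ignoring the "later" ordering, and is capped by subsearch result limits.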
Hi, I need to install the add-on below. This add-on creates indexes and the required roles, but we don't want the add-on to control the indexes, so indexes.conf has been taken out of the add-on and we will create the indexes ourselves. We want 14 indexes to be created, but is it OK to go with one index and different sourcetypes instead? We do have the other configs provided in the add-on. Will there be any impact if we go with one index with multiple sourcetypes, and will there be any impact during add-on updates?

https://splunkbase.splunk.com/app/4153/
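A hedged sketch, in case the single-index route is chosen (the index name and retention value are assumptions): sourcetypes are assigned by the add-on's inputs/props and are independent of how many indexes exist, so one index with many sourcetypes works technically, but any saved searches, dashboards or role-based access in the add-on that assume the 14 index names would need adjusting, and index-level controls (retention, quotas, access) can then no longer differ per data type.

# indexes.conf on the indexers (or created via the Cloud/UI workflow)
[your_single_index]
homePath   = $SPLUNK_DB/your_single_index/db
coldPath   = $SPLUNK_DB/your_single_index/colddb
thawedPath = $SPLUNK_DB/your_single_index/thaweddb
frozenTimePeriodInSecs = 7776000

During add-on updates, the packaged indexes.conf could reappear unless it is removed again or overridden, so that is worth re-checking after each upgrade.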