All Topics

Hi, here is the log:

23:50:26.698 app module1: CHKIN: Total:[100000] from table Total:[C000003123456] from PC1
23:33:39.389 app module2: CHKOUT: Total:[10] from table Total:[C000000000000] from PC2
23:50:26.698 app module1: CHKIN: Total:[100000] from table Total:[C000000000030] from PC1
23:33:39.389 app module2: CHKOUT: Total:[10] from table Total:[C000000000000] from PC2

I need to sum the values in brackets. Expected output:

items     total1    total2     from
CHKIN     200000    3123486    PC1
CHKOUT    20        0          PC2

Thanks
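One way this is sometimes approached (a sketch only; the index name and the exact event layout are assumptions based on the sample above) is to extract the two bracketed numbers, the event type, and the PC with rex, then sum by event type:

index=app ("CHKIN" OR "CHKOUT")
| rex "(?<items>CHKIN|CHKOUT): Total:\[(?<total1>\d+)\] from table Total:\[C(?<total2>\d+)\] from (?<from>\S+)"
| eval total2=tonumber(total2)
| stats sum(total1) as total1 sum(total2) as total2 values(from) as from by items

The tonumber call strips the leading zeros so the second column sums as a number rather than a string.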
Hi, I have an index called sensitive_data that contains sensitive data. I want to ensure that ONLY one particular user with the roles power and user has access to this index, while other users with the same roles do not. How can I do this reliably? What I have done is create an LDAP group, map a role to this group, and allow access to the index only for that role. Could someone please confirm whether this approach is correct?

[role_power]
cumulativeRTSrchJobsQuota = 10
cumulativeSrchJobsQuota = 200
list_storage_passwords = enabled
schedule_search = disabled
srchDiskQuota = 1000
srchMaxTime = 8640000
rtsearch = disabled
srchIndexesAllowed = *
srchIndexesDisallowed = sensitive_data

[role_user]
schedule_search = enabled
srchMaxTime = 8640000
srchDiskQuota = 500
srchJobsQuota = 8
srchIndexesAllowed = *
srchIndexesDisallowed = sensitive_data

[role_sensitive-data-power]
cumulativeRTSrchJobsQuota = 0
cumulativeSrchJobsQuota = 0
importRoles = power
srchIndexesAllowed = sensitive_data
srchMaxTime = 8640000

[role_sensitive-data-user]
cumulativeRTSrchJobsQuota = 0
cumulativeSrchJobsQuota = 0
importRoles = user
srchIndexesAllowed = sensitive_data
srchMaxTime = 8640000

Thanks
How can I group the start and end time of a station like the attachment shows? I want to skip the start times marked with X.
Hi Splunkers,

Hopefully I am posting in the correct place; apologies if not! I have the following code/SPL from inside the XML form. It looks inside a lookup and then gives information about a specific field (field name taken from the variable "FieldName") which matches the value of SearchString (value taken from the variable "SearchString").

| inputlookup $lookup_name$
| search $FieldName$=$SearchString$

Those of you with experience will see that it doesn't work this way. I am assuming that to make this XML code work and give me the search result I expect, I need to expand the variables? If so, any idea how to do that?

Regards,
vagnet
Hey All,

I get "no results found" for a tag that looks for fields created by a rex. So with:

sourcetype=DataServices
| rex "JOB: Job (?<BIMEJob><(?<=<).*(?=>)>)"

I get the extracted field with results. Now I want to bunch some field values together, so I create a tag containing the field values I care about and add it to my search, but I get no results found:

sourcetype=DataServices tag=GB1_BIME
| rex "JOB: Job (?<BIMEJob><(?<=<).*(?=>)>)"

Greatly appreciated if someone could help!
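One alternative sometimes used in this situation (a sketch; the job names in the IN list are placeholders): since tag=GB1_BIME in the base search is evaluated before the | rex command creates BIMEJob, filtering on the rex-created field after the extraction avoids the problem, at the cost of keeping the value list in the search (or in a lookup) instead of in a tag:

sourcetype=DataServices
| rex "JOB: Job (?<BIMEJob><(?<=<).*(?=>)>)"
| search BIMEJob IN ("<JobA>", "<JobB>")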
Hi,

I am using Splunk DB Connect 2.1.4 to get data from a table in an Oracle database (a table with around 1000 transactions per second, synced in near real time) to a Splunk server running as a heavy forwarder, and it works OK. My DB Connect 2.1.4 query uses SEQ_NO as the rising column.

For some reasons I need to upgrade to DB Connect 3.6 (latest), so I created a new heavy forwarder and installed DB Connect 3.6. This one fails: some data is duplicated and some is missed. My DB Connect 3.6 query also uses SEQ_NO as the rising column.

Please let me know how I can solve this issue. Thanks!
Hi,

I'm trying to configure TA-symantec_atp v1.5.0 on Splunk 8.1.6, but nothing happens when I try to save the config in the UI page. I found the errors below in "/opt/splunk/var/log/splunk/python.log":

2021-09-22 13:18:41,150 +0200 ERROR __init__:164 - The REST handler module "email_symantec_util" could not be found. Python files must be in $SPLUNK_HOME/etc/apps/$MY_APP/bin/
2021-09-22 13:18:41,150 ERROR The REST handler module "email_symantec_util" could not be found. Python files must be in $SPLUNK_HOME/etc/apps/$MY_APP/bin/
2021-09-22 13:18:41,151 +0200 ERROR __init__:165 - No module named 'rapid_diag'
Traceback (most recent call last):
  File "/opt/splunk/lib/python3.7/site-packages/splunk/rest/__init__.py", line 161, in dispatch
    module = __import__('splunk.rest.external.%s' % parts[0], None, None, parts[0])
  File "/opt/splunk/etc/apps/TA-symantec_atp/bin/email_symantec_util.py", line 6, in <module>
    from . import logger_manager
  File "/opt/splunk/etc/apps/splunk_rapid_diag/bin/logger_manager.py", line 14, in <module>
    from rapid_diag.util import get_splunkhome_path, get_app_conf
ModuleNotFoundError: No module named 'rapid_diag'

And in "/opt/splunk/var/log/splunk/web_service.log":

2021-09-22 13:24:03,700 ERROR [614b1253af7ff740791c10] utility:58 - name=javascript, class=Splunk.Error, lineNumber=272, message=Uncaught TypeError: Cannot read properties of undefined (reading 'data'), fileName=https://localhost:8443/en-US/static/@071D8440E5D1A785ECFF180D1ECF4589ACA117B332BB46A44AF934EFD3BCE244.13:1/app/TA-symantec_atp/application.js
2021-09-22 13:24:05,706 ERROR [614b1255af7ff75b72bcd0] utility:58 - name=javascript, class=Splunk.Error, lineNumber=272, message=Uncaught TypeError: Cannot read properties of undefined (reading 'data'), fileName=https://localhost:8443/en-US/static/@071D8440E5D1A785ECFF180D1ECF4589ACA117B332BB46A44AF934EFD3BCE244.13:1/app/TA-symantec_atp/application.js
2021-09-22 13:24:07,698 ERROR [614b1257ad7ff740411e50] utility:58 - name=javascript, class=Splunk.Error, lineNumber=272, message=Uncaught TypeError: Cannot read properties of undefined (reading 'data'), fileName=https://localhost:8443/en-US/static/@071D8440E5D1A785ECFF180D1ECF4589ACA117B332BB46A44AF934EFD3BCE244.13:1/app/TA-symantec_atp/application.js
2021-09-22 13:24:09,702 ERROR [614b1259ae7ff740791790] utility:58 - name=javascript, class=Splunk.Error, lineNumber=272, message=Uncaught TypeError: Cannot read properties of undefined (reading 'data'), fileName=https://localhost:8443/en-US/static/@071D8440E5D1A785ECFF180D1ECF4589ACA117B332BB46A44AF934EFD3BCE244.13:1/app/TA-symantec_atp/application.js

Background: I'm currently using TA-symantec_atp v1.3.0 with Splunk 7.3.2, but I want to upgrade to Splunk 8.1.x, and only TA-symantec_atp v1.5.0 is compatible with 8.1.x and above (Python 3). I've tried to install and configure v1.5.0 of the add-on on several machines running Splunk 8.1.x, but all attempts resulted in the same error described above.

Does anybody have this TA working?
The default page needs to be changed: after logging in to Splunk I should be directed to the triggered alerts page.

For example, given the triggered alerts below, after login I should be able to see each alert that has been triggered, with its time, severity, and other common information, and after analyzing it I should be able to close the triggered alert.

Can anyone please guide me on this?
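A sketch of a search that is sometimes used to list fired alerts in a dashboard panel (the REST endpoint is the standard fired-alerts endpoint, but treat the returned field names as assumptions, since they can vary by version):

| rest /services/alerts/fired_alerts/-
| table savedsearch_name severity trigger_time
| convert ctime(trigger_time)

A simple dashboard built around something like this could then be set as the landing page, though closing/acknowledging alerts would still happen from the Triggered Alerts view.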
I'm trying to plan a (for me) large deployment, connecting several sites to a headquarters. Each site has from 200 to 900 clients that need to send data to the indexers. At two locations the bandwidth is relatively poor, so the idea is to set up a local cluster of two indexers at those locations. At the headquarters we'd set up a cluster of three indexers for all other locations that don't have their own cluster, and of course for the local data that pours in.

Locations: within each location I thought of forwarders for the servers (obviously) and then a heavy/intermediate forwarder which collects all those logs and sends them to the indexer cluster at the headquarters. At locations that need their own cluster, we'd set up a two-server cluster.

Headquarters: there we set up a three-server cluster and a master node, two search heads (one for the technical part and one for Enterprise Security), and a deployment server to distribute apps to all the connected clients in all locations. The search heads should send their requests not only to the headquarters cluster but to all locations; I've read that it is possible for a search head to search across multiple clusters. The clusters don't talk to each other, so there is no data exchange between the headquarters cluster and the two other clusters.

For better understanding, since this is complex to explain, I've generated a draw.io architecture view. My questions about the feasibility of this architecture are as follows:

- Do we need a master node for each cluster, or can a master node manage multiple clusters? This is under the assumption that the locations with their own cluster don't sync with the headquarters and only process requests from the search heads.
- Is there a minimum bandwidth that should be provided? The Splunk docs say 1 Gbit/s, but I think for this kind of deployment 10-50 Mbit/s is enough?
- Is it basically better to just build a larger central cluster with more servers at the headquarters, or which option performs better? My fear is to set all of this up and then fall on my face because I didn't scale the servers correctly.
- Is one deployment server for all locations and all servers feasible, or should I set up a deployment server at least for the locations with their own cluster too?

All specs should be in the draw.io view. I hope you can answer my questions. I know it's a lot, but I'm afraid I may scale it wrong and then, months into the setup, notice what is wrong.

Thanks a lot for your tips and suggestions!

One more question: do locations with their own cluster need their own master node?
Hello Splunkers!

I am getting the error below on one of my HFs. How can I resolve or work around this issue? Please share your expert suggestions.

Tcpout Processor: The TCP output processor has paused the data flow. Forwarding to output group itops_mulesoft_prod_hf has been blocked for 100 seconds. This will probably stall the data flow towards indexing and other network outputs. Review the receiving system's health in the Splunk Monitoring Console. It is probably not accepting data.

Thanks in advance.
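A sketch of a diagnostic search that is often used to see which queues are blocked on the forwarder and on the receiving side (it assumes a default installation where metrics.log is indexed into _internal):

index=_internal source=*metrics.log group=queue blocked=true
| stats count by host name
| sort - count

A host and queue name that dominate this output usually indicate where the backpressure starts.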
State    Date        Desc    Count
bc       11102021    vm      234569
bc       12102021    vm      456328
bc       11102021    vm      234569
bc       12102021    vm      4532178
cd       11102021    vm      234000
cd       12102021    vm      234000
cd       11102021    vm      234000
cd       12102021    vm      568902

From the stats output above, I would like to first group the rows by state, then compare count[0] with count[1], then count[2] with count[3], and then count[3] with count[0]; if any of those pairs match, the matching rows should be displayed as the result. In the case above, for state=cd, index[0] and index[1] are the same, so the expected result is:

State    Date        Desc    Count
cd       11102021    vm      234000
cd       12102021    vm      234000

Please assist.
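A rough sketch of one way to keep only rows whose Count equals the Count of an adjacent row in the same State (it only covers adjacent pairs, not the count[3] vs. count[0] wrap-around, and it assumes the rows arrive in the order shown; the first line stands for whatever search produces the table above):

<your existing stats search that produces the table above>
| streamstats current=f window=1 last(Count) as prev_count by State
| eval matches_prev=if(Count==prev_count, 1, 0)
| reverse
| streamstats current=f window=1 last(Count) as next_count by State
| eval matches_next=if(Count==next_count, 1, 0)
| reverse
| where matches_prev=1 OR matches_next=1
| table State Date Desc Count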
Hi,

Is it possible, when using Global Account, to customise the fields? That is, to add fields other than just Username and Password. Ideally, I would like to be able to specify an Account Name, API Key, and Authorization header in the Account tab, so that when configuring each input I can choose the Account Name from a drop-down and have it pull in the API Key and Authorization Header fields. I can specify these as "Additional Parameters" for the add-on, but then I have no way of selecting them on a per-input basis when configuring each input.
Hello everyone,

I have a question about how to write a subquery in Splunk. For example, I would like to get a list of productIds that were returned but were not purchased again later, i.e. the NOT IN subquery part. How can I accomplish this?

index=main sourcetype=access_combined_wcookie action=returned NOT IN
    [search index=main sourcetype=access_combined_wcookie action=purchase | table clientip]
| stats count, dc(productId) by clientip

Thank you in advance.
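For reference, the usual subsearch pattern for "NOT IN" in SPL (a sketch based on the fields above; subsearch result limits apply, and the "not purchased later than the return" time ordering is not handled here) drops the IN keyword and lets the subsearch return the field=value pairs to exclude:

index=main sourcetype=access_combined_wcookie action=returned NOT
    [search index=main sourcetype=access_combined_wcookie action=purchase
    | dedup productId
    | fields productId]
| stats count dc(productId) by clientip

The subsearch expands to (productId="x") OR (productId="y") ..., which the leading NOT then negates.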
Hi,

I need to install the add-on below. This add-on creates indexes and the required roles, but we don't want the add-on to control the indexes, so indexes.conf has been taken out of the add-on and we will create the indexes ourselves. The add-on expects 14 indexes to be created; is it OK to go with one index and different sourcetypes instead? Given the other configs provided in the add-on, will there be any impact if we go with one index and multiple sourcetypes, and will there be any impact during add-on updates?

https://splunkbase.splunk.com/app/4153/
Hello Splunk Gurus,

I am trying to generate tabular data for the API requests. The following is the query to extract the table data below; the FirstComp, SecondComp and ThirdComp fields are extracted at run time from the log.

index=micro host=app150*usa.com "API Timeline"
| rex field=_raw "FirstCompTime:(?<FirstComp>[^\,]+)"
| rex field=_raw "SecondCompTime:(?<SecondComp>[^\,]+)"
| rex field=_raw "ThirdCompTime:(?<ThirdComp>[^\,]+)"
| table FirstComp, SecondComp, ThirdComp

FirstComp    SecondComp    ThirdComp
78           25            31
80           22            34
81           26            36

Now I need to calculate the 95th and 99th percentiles, making sure the component names appear in the first column as shown below:

Components    95th percentile    99th percentile
FirstComp     77                 79
SecondComp    23                 24
ThirdComp     32                 35

The desired output should show the 95th and 99th percentiles per component. So eventually I want to bring the column header names in as the first column's values, with the next two columns holding the respective 95th and 99th percentiles.

Thanks in advance for your time and help.
Tanzy
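One pattern that is sometimes used for this shape of output (a sketch; it assumes the three rex extractions above yield numeric values) is to unpivot the columns with untable and then compute the percentiles by component name:

index=micro host=app150*usa.com "API Timeline"
| rex field=_raw "FirstCompTime:(?<FirstComp>[^\,]+)"
| rex field=_raw "SecondCompTime:(?<SecondComp>[^\,]+)"
| rex field=_raw "ThirdCompTime:(?<ThirdComp>[^\,]+)"
| table _time FirstComp SecondComp ThirdComp
| untable _time Components value
| stats perc95(value) as "95th percentile" perc99(value) as "99th percentile" by Components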
We have recently migrated from on-prem to Splunk Cloud. The current setup is: UFs (several of them) --> 2x HFs --> Splunk Cloud.

On one of the UF hosts, outputs.conf is currently configured to send events to both the on-prem indexer (for business reasons) and the HF, so that the events are seen on both the on-prem indexer and the Cloud indexer for a short period until on-prem is retired completely.

When I query the index on on-prem and on Cloud for the same time interval, I see my event breaking interpreted differently on Cloud compared to on-prem, even though both search heads have the same props.conf for my sourcetype. For example, for a particular search string on both search heads and the same index, I get different event counts. Also, on Cloud, my events are not breaking as per the regex defined for line breaking, and some events are completely missing on Cloud even though I can see them on the on-prem search head.

props.conf snippet:

DATETIME_CONFIG =
LINE_BREAKER = ([\r\n]+)
MAX_TIMESTAMP_LOOKAHEAD = 12
NO_BINARY_CHECK = true
TIME_FORMAT = %H:%M:%S.%3N
category = Custom
pulldown_type = 1

Do I need to put a copy of props.conf on the HF as well? If yes, must I put the props.conf of every sourcetype on the HF, or only under certain specific conditions?
Hi, we have two fields, latency and partition. I want to graph latency vs. time on one of our machines for each of our partitions, so the resulting panel will have four separate line graphs. The log output shows four different partitions: 00-BTECEU, 01-BTECEU, 02-BTECEU, 03-BTECEU. I was able to parse out the partition, but I'm having issues separating the latency for each partition.

Here's the query I tried, but the resulting table is empty:

host=irmnrjeb0227d AND latency
| reverse
| rex field=name "Partition\s(?<pName>\d\d\W\w+)"
| streamstats current=f last(latency) as previouslatency by pName
| rename latency as currentLatency
| eval delta = currentLatency - previouslatency
| timechart span=30s first(delta) as Delta by pName

Any help is appreciated, thanks!
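If the goal is simply latency over time per partition, a simpler sketch (assuming latency is already extracted as a numeric field and the pName regex matches the name field) would be:

host=irmnrjeb0227d latency=*
| rex field=name "Partition\s(?<pName>\d\d\W\w+)"
| timechart span=30s avg(latency) by pName

If the table stays empty, checking whether pName is actually populated (for example with | stats count by pName) is usually the first step, since an unmatched rex leaves the by-field null and timechart then groups those rows under NULL.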
Hi,

I have a field in my log called "MobileNumber", and I need to show the count of MobileNumber by location on a map.

For example: 00121234567
Area code: 0012
Number: 1234567

If the area code belongs to Berlin (0151, 0157 or 0173), show the total count of area codes belonging to Berlin on the map. If the area code belongs to Wolfsburg (0361), show the total count of area codes belonging to Wolfsburg on the map.

FYI: latitude and longitude do not exist in the log file.

Any ideas? Thanks.
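A rough sketch of one way to do this without coordinates in the log (the index name, the assumption that the area code is the first four digits, and the hard-coded coordinates are all assumptions; the coordinates are only approximate city centres):

index=mobile_logs MobileNumber=*
| rex field=MobileNumber "^(?<area_code>\d{4})"
| eval city=case(area_code="0151" OR area_code="0157" OR area_code="0173", "Berlin", area_code="0361", "Wolfsburg")
| where isnotnull(city)
| eval lat=case(city="Berlin", 52.52, city="Wolfsburg", 52.42)
| eval lon=case(city="Berlin", 13.40, city="Wolfsburg", 10.79)
| geostats latfield=lat longfield=lon count by city

A lookup file mapping area codes to cities and coordinates would scale better than hard-coding the case() expressions once there are more than a handful of prefixes.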
We're ingesting Tomcat logs and looking for items tagged [SEVERE]. I'd like to be able to pull a report of error rate and look for errors which are occurring at a significantly higher than average rate *for their error type*. In addition, we're getting data streams from multiple hosts, each of which is really its own instance and has its own "native" error rate.

I need the average rate of occurrence of errors over the last week and over the last day, grouped by host and error type. Then I need to flag any error whose rate has risen by, say, 500%.

So far the best I've come up with is this:

index="tomcat" sourcetype="catalina.out_logs" SEVERE earliest=-7d latest=-1d
| rex field=_raw "\[SEVERE[\s\t]+\][\s\t]+(?<err>.+)[\n\r]"
| eval errhost = host + "::" + err
| bucket _time span=1h
| stats count as tcount by errhost
| stats avg(tcount) as wk_avg by errhost
| appendcols
    [search index="tomcat" sourcetype="catalina.out_logs" SEVERE earliest=-1d latest=now()
    | rex field=_raw "\[SEVERE[\s\t]+\][\s\t]+(?<err>.+)[\n\r]"
    | eval errhost = host + "::" + err
    | bucket _time span=1h
    | stats count as tcount by errhost
    | stats avg(tcount) as day_avg by errhost]
| eval perc = round(((day_avg - wk_avg) * 100 / wk_avg), 2)
| fields + perc errhost
| search perc > 500.0
| search NOT "Unable to serve form"
| sort perc desc

So: I'm pulling SEVERE errors, extracting just the error text, concatenating that to the host to get my group-by string, bucketing in 1-hour increments to get an average, then building a chart with the 7-day average and the 1-day average for each host/error pair.

Wondering if anyone else has a better way to do it? Thanks!
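One alternative sometimes suggested (a sketch; the threshold and time ranges are the same assumptions carried over from above) computes both averages in a single pass instead of appendcols, which can mis-align rows when the two searches return different sets of errhost values. It also keeps _time in the first stats so the hourly bucketing actually feeds the average:

index="tomcat" sourcetype="catalina.out_logs" SEVERE earliest=-7d latest=now()
| rex field=_raw "\[SEVERE[\s\t]+\][\s\t]+(?<err>.+)[\n\r]"
| eval errhost = host . "::" . err
| bucket _time span=1h
| stats count as tcount by errhost _time
| eval window=if(_time >= relative_time(now(), "-1d"), "day", "week")
| stats avg(eval(if(window="week", tcount, null()))) as wk_avg avg(eval(if(window="day", tcount, null()))) as day_avg by errhost
| eval perc = round((day_avg - wk_avg) * 100 / wk_avg, 2)
| where perc > 500 AND NOT like(errhost, "%Unable to serve form%")
| sort - perc

Note that hours with zero errors produce no bucket in either version, so both averages are over hours that had at least one error rather than over all hours.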
Hello Splunk SOAR User!

We're happy to announce that Splunk SOAR 5.0.1 on-prem is here, and with it an all-new visual playbook editor! Here's the catch: you'll need to upgrade to Splunk SOAR 4.6 or later to use this feature, as we will no longer be supporting any earlier versions.

What's New?

This new, modern visual playbook editor makes it easier than ever to create, edit, implement and scale automated playbooks to help you eliminate grunt work and respond to incidents at machine speed. The improvements we've made to the new editor have been centered on ensuring that anyone can automate, regardless of your comfort level with coding.

Here are some highlights of what this modern visual playbook editor delivers:

- Improved readability: wider blocks, labels on lines, UI-based configuration
- Vertical playbook orientation
- New options for creating playbook blocks with drag-and-drop, mini-menu, and keyboard shortcuts
- Definable playbook inputs and outputs
- Ability to create smaller, micro-playbooks

For those of you who'd like to stick to the original way of building playbooks, never fear! You still have the option to build and edit playbooks in the classic visual playbook editor.

Important update on supported versions: with the release of 5.0.1, the criteria have been met for the end of service of all Phantom versions <= 4.6. For more details on Splunk support and end of support criteria, click here.

Happy Automating!
Team Splunk