All Topics

Hi, I have a log file like the one below and need to extract the section after the first "]" up to the next "[", ".", or ":".

2020-04-24 23:59:59,511 INFO ABCD.InIT-Service-1234567 [SrvListener] Receive Message[123456789ABCD123E123456789*] from [Service.APP]
2020-04-24 23:59:57,055 INFO ABCD.InIT-Service-1234567_EFGH.InIT-AppService-5764693 [AbcEndpointManager] Send Message [123456789ABCD123456789123456789*] to A[000] B[0000]
2020-04-24 23:59:59,081 INFO ABCD.InIT-Host-1234567_EFGH.InIT-Service-1234567 [TopologyProcessorService] Message Processed: A[000] B[0000]
2020-04-24 23:29:59,844 INFO ABCD.InIT-Service-1234567 [NetworkProcessor] NetworkProcessor Accomplished: A[000] B[0000]
2020-04-24 23:29:59,851 INFO NAME-1234567 [ExecuteService] CustomeService_clusterCustomeCommand chain was done. Define Parameters[input0='00000',input1='000000']

Expected values:
Receive Message
Send Message
Message Processed
NetworkProcessor Accomplished
CustomeService_clusterCustomeCommand chain was done

Thanks
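For what it's worth, the extraction rule can be prototyped outside Splunk first. The sketch below uses Python's re module with a candidate pattern that grabs everything after the first "]" up to the next "[", "." or ":" — it assumes the first "]" always closes the bracketed logger/thread name, which holds for the sample lines above:

```python
import re

# Capture the text after the first "]" up to the next "[", "." or ":".
# Assumes the first "]" closes the bracketed logger/thread name.
PATTERN = re.compile(r'\]\s*([^\[.:]+)')

def extract_action(line):
    m = PATTERN.search(line)
    return m.group(1).strip() if m else None

line = ('2020-04-24 23:59:59,511 INFO ABCD.InIT-Service-1234567 '
        '[SrvListener] Receive Message[123456789ABCD123E123456789*] from [Service.APP]')
print(extract_action(line))  # Receive Message
```

The same regex could then be tried in SPL with the rex command using a named capture group.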
My dashboard generates a table with 50+ columns. Based on the field 'LogLevel', I need to change the font color for that particular row only: when LogLevel is 'ERROR', change the font color to red for that row. I would like this to be done using Simple XML. With JavaScript, I would need to place the .js file on the Splunk server, which is a big process. Please suggest.
Hello all, I have the string below:

2020-04-24 23:14:47,422 INFO http-8080-1 com.pscu.dxsimple.raApp - Response (Success:true)-(Validation:true)-(F_TAG:1402)-(CLIENT_ID:2113)-(Total_TT:4046ms)-(AppServer_TT:3419ms)

I need to extract the key-value pairs, i.e. "(Success:true)-(Validation:true)-(F_TAG:1402)-(CLIENT_ID:2113)-(Total_TT:4046ms)-(AppServer_TT:3419ms)", as specific fields. I used:

index=testindex source="tomcat.txt" | extract pairdelim="\"{-}" kvdelim=":"

I can see the key-value pairs being generated, but I need this data to be persistent and saved permanently. How can I do this? Please help.
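As a sanity check on the delimiters, the "(Key:value)" pairs in a line like this can be pulled apart with a simple regex; here is a sketch in Python (the sample line mirrors the event above):

```python
import re

line = ('2020-04-24 23:14:47,422 INFO http-8080-1 com.pscu.dxsimple.raApp - '
        'Response (Success:true)-(Validation:true)-(F_TAG:1402)-'
        '(CLIENT_ID:2113)-(Total_TT:4046ms)-(AppServer_TT:3419ms)')

# Each pair has the form "(Key:value)"; capture the key and the value
pairs = dict(re.findall(r'\((\w+):([^)]*)\)', line))
print(pairs["Total_TT"])  # 4046ms
```

To make an extraction permanent in Splunk (rather than inline per search), field extractions are generally configured on the sourcetype, e.g. via props.conf, though the exact approach depends on your deployment.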
I am trying to expand a non-admin user's permissions in Splunk to see data inputs (solved with the edit_monitor capability), but the user still cannot see the inputs for Google Sheets, etc. What permissions are needed on a Splunk user profile to gain access to the Google import/export data inputs?
I've combed through many similar questions, but no resolutions thus far have worked for me.

Issue:
- Unable to get past http://servername:8000/en-US/app/splunk_app_db_connect/ftr#/welcome
- Upon first launch of the app, an error briefly pops up: "Cannot communicate with task server, please check your settings"
- In the output of "./splunk cmd splunkd print-modinput-config --debug server" I see:

Found scheme="server".
Locating script for scheme="server"...
Found script "/opt/splunk/etc/apps/splunk_app_db_connect/linux_x86_64/bin/server.sh" to handle scheme "server".
Introspecting scheme=server: /usr/lib/jvm/java-11-openjdk-amd64/bin/java: symbol lookup error: /usr/lib/jvm/java-11-openjdk-amd64/bin/java: undefined symbol: JLI_InitArgProcessing
Introspecting scheme=server: script running failed (exited with code 127).
Unable to initialize modular input "server" defined in the app "splunk_app_db_connect": Introspecting scheme=server: script running failed (exited with code 127).

What I've tried thus far:
- Manually uninstalled and reinstalled the app, and restarted Splunk
- Added /etc/ld.so.conf.d/java.conf with the single-line contents (added to a new Docker image): /usr/lib/jvm/java-11-openjdk-amd64/lib/jli

System:
- Splunk 8.0.3, dockerized on Debian 10
- OpenJDK 11
- DB Connect 3.3.0
- JAVA_HOME is set to /usr/lib/jvm/java-11-openjdk-amd64/
I have two searches which I am joining with appendcols, passing the result of the subsearch to the main query:

index="index" sourcetype="aws:cloudwatch" source="source" account_id="account" metric_name="numberofmessages" CORS_Value>"1"
| eval numberofmessages=CORS_Value/5
| rename metric_dimensions as queue_names
| table queue_names numberofmessages
| appendcols [ search index="index" sourcetype="aws:cloudwatch" source="source" account_id="account" metric_name="ageofmessages" Sum>0 | rename Sum AS TimeinQueue | table TimeinQueue]
| dedup queue_names

The problem is that the main query returns its own results even if the subsearch produces none. Basically, I want the main query to run only if the subsearch satisfies its condition. Can someone assist with this, please?
Hello there, I have a search that gets the events attributed to N users, and I would like to compare the total of today's events to the week's median (not the average). My base search looks something like this:

index=myindex earliest=-w@d | timechart span=1d count(events) by user limit=0

Which gives me this output:

_time   user1  user2  userN
day1    1      1      4
day2    2      5      2
day3    6      7      7
...
today   3      8      6

I'd like to compare today's total events with the median of the week (day 1 through today) for each user, returning the users that report 50% over or under the median. I managed to do this with join, since I couldn't get it done with timechart/timewrap, but the search is really slow:

index=myindex earliest=@d | stats count(events) as today_totals by user | join user [search index=myindex earliest=-w@d | bucket span=1d _time | stats count by _time user | stats median(count) as median_user] | where today_totals>(median_user/0.5) OR today_totals<(median_user*0.5)

Any way to do this without join? Thanks
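The per-user comparison itself is cheap once the daily counts are in hand; here is the intended logic sketched in Python with made-up counts (median over day 1 through today, flagging users 50% over or under it):

```python
from statistics import median

# Daily counts per user for the week, with "today" last; numbers are hypothetical
week = {"user1": [1, 2, 6, 3], "user2": [10, 10, 10, 1], "userN": [4, 2, 7, 20]}

flagged = []
for user, counts in week.items():
    med = median(counts)   # median over day 1 .. today
    today = counts[-1]
    # 50% over or under the median
    if today > med * 1.5 or today < med * 0.5:
        flagged.append(user)
print(flagged)  # ['user2', 'userN']
```

Note the threshold here is written as 1.5 times the median for "50% over"; the search above uses median_user/0.5 (i.e. double the median), so the exact band is worth double-checking against your intent.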
This is a follow-up to my "simplifying a (field extraction error) dashboard?" question earlier today. The new question is: how do I structure my base and post-process searches to produce single-value visualizations for the three calculated stats values: (1) the count of events with errors, (2) the count without errors, and (3) the total, in addition to this?

sourcetype="tomcat:vantage" | eval "Field Extraction Error(s)" = if(isnull(message),"1","0") | stats sparkline count by "Field Extraction Error(s)"

In other words, the result should be something like this, where the single-value visualizations are produced using base and post-process searches, as opposed to separate ones. Thanks!
So in this search, fullA_list is everything in zone A, do is everything in zone B, and zonesserialnumbers is a list of serial numbers. The goal is to show that everything in do is in both zonesserialnumbers and fullA_list.

| inputlookup fullA_list
| join [ | inputlookup zonesserialnumbers ]
| join SerialNumber type=outer [ | inputlookup do | rename serialNo as SerialNumber ]
| search in_do=yes Location=$location$
| fillnull value="No" in_do
| fillnull value="No" Owned
| fillnull value="No" In_full
| chart count by Owned

CSV details:

do:
deviceName | in_do | location | serialNo | uuid
Yes | AAAA-AAA | AAAAAAAAAAA | AKAAIFKA112844921-129892184-19129

zonesserialnumbers:
ProductName | SerialNumber | Owned
Numbers and letters | AAAAA1234AAAA | Yes

fullA_list:
ComputerName | In_A_list | Location | SerialNumber | UDID
AAAAA-AAAAA | Yes | Country | AAAA1234AAA | AAAAAAAAAAAAAAAA-1234-AAAAAAAA
Right now this displays what I want, but how can I return a row for each hour of the day when my alert is scheduled?

index=records "ProcessRec: Total Recd"
| eval fields=split(_raw,"|")
| eval Machine=mvindex(fields,4)
| stats count(eval(Machine="SERVER1")) AS "SERVER1" count(eval(Machine="SERVER2")) AS "SERVER2"
| addtotals
| foreach "SERVER1", "SERVER2" [| eval "<<FIELD>> %"=round((<<FIELD>>/Total)*100,2)]
Hi all, I'm trying to use the Microsoft Azure Add-on for Splunk and was successful in getting this add-on to ingest Azure AD user data via the supplied input. When trying to use the Azure AD Sign-in input, I'm not getting any data, and I see the following error in the logs:

index="_internal" host=xxxx source="/opt/splunk/var/log/splunk/ta_ms_aad_MS_AAD_signins.log"

returns:

2020-04-24 15:07:53,551 ERROR pid=19474 tid=MainThread file=base_modinput.py:log_error:307 | Get error when collecting events.
Traceback (most recent call last):
  File "/opt/splunk/etc/apps/TA-MS-AAD/bin/ta_ms_aad/modinput_wrapper/base_modinput.py", line 127, in stream_events
    self.collect_events(ew)
  File "/opt/splunk/etc/apps/TA-MS-AAD/bin/MS_AAD_signins.py", line 84, in collect_events
    input_module.collect_events(self, ew)
  File "/opt/splunk/etc/apps/TA-MS-AAD/bin/input_module_MS_AAD_signins.py", line 62, in collect_events
    query_date = get_start_date(helper, check_point_key)
  File "/opt/splunk/etc/apps/TA-MS-AAD/bin/input_module_MS_AAD_signins.py", line 37, in get_start_date
    d = helper.get_check_point(check_point_key)
  File "/opt/splunk/etc/apps/TA-MS-AAD/bin/ta_ms_aad/modinput_wrapper/base_modinput.py", line 518, in get_check_point
    self._init_ckpt()
  File "/opt/splunk/etc/apps/TA-MS-AAD/bin/ta_ms_aad/modinput_wrapper/base_modinput.py", line 509, in _init_ckpt
    scheme=dscheme, host=dhost, port=dport)
  File "/opt/splunk/etc/apps/TA-MS-AAD/bin/ta_ms_aad/solnlib/modular_input/checkpointer.py", line 166, in __init__
    scheme, host, port, **context)
  File "/opt/splunk/etc/apps/TA-MS-AAD/bin/ta_ms_aad/solnlib/utils.py", line 167, in wrapper
    raise last_ex
HTTPError: HTTP 402 Payment Required -- Requires license feature='KVStore'

About this setup: the add-on is running on a heavy forwarder, and this forwarder is in the forwarder license group, forwarding to Splunk Cloud. I've double-checked all the permissions that the registered app needs in Azure, and I think I'm good there.
This same registered app is in use today with the legacy Microsoft Azure Active Directory Add-on to pull sign-in and audit logs. The permissions I've granted the registered app are shown here: [screenshot]. Thoughts on what may be going on here? Thanks!!
My problem is nearly identical to the issue in this past post: https://answers.splunk.com/answers/508577/pivot-not-showing-results-even-though-sampling-the.html (not enough karma to post links yet). While my search returns millions of results and I have integrated this search into my dataset, I cannot get the pivot editor to show any matching events. "Sampling" my search confirms that it is valid. The default column values setting is basically "Count of events", but it says there are 0 matches. I can see the server processing my search as it says "0 of 950,000 events matched", etc., until it hits my 2.6 million odd records and simply states "0 events before CURRENT DATETIME". I have tried changing which column is used in column values, to no avail. I have set the dataset permissions to "Global" and the lookup table it uses to "Global" as well. I am wondering if the presence of a lookup table in my search is contributing to this problem. Any help is greatly appreciated, and I will provide additional details/samples if required. I only started using Splunk last week, so forgive any ignorance on my behalf.

Edit: Sample Data

Update: When I edit the fields in the dataset inside my data model, it returns "Values" but no "Events". Given the sample data, why is that?

Here is a table that is the result of joining Rapid7's forward DNS data (the first 10 .com domains in their file) with MaxMind's GeoLite2 ASN data file:

Domain Name | IP Address | ASN Range | ASN | ASN Organization
0.220.165.83.static.reverse-mundo-r.com | 83.165.220.0 | 83.165.0.0/16 | 12334 | R Cable y Telecable Telecomunicaciones, S.A.U.
0.220.178.107.bc.googleusercontent.com | 107.178.220.0 | 107.178.192.0/18 | 15169 | GOOGLE
0.220.178.170-dedicated.multacom.com | 204.13.152.7 | 204.13.152.0/22 | 35916 | MULTA-ASN1
0.220.184.35.bc.googleusercontent.com | 35.184.220.0 | 35.184.0.0/13 | 15169 | GOOGLE
0.220.154.104.bc.googleusercontent.com | 104.154.220.0 | 104.154.0.0/15 | 15169 | GOOGLE
0.220.155.104.bc.googleusercontent.com | 104.155.220.0 | 104.154.0.0/15 | 15169 | GOOGLE
0.220.170.108.bc.googleusercontent.com | 108.170.220.0 | 108.170.192.0/18 | 15169 | GOOGLE
0.220.125.34.bc.googleusercontent.com | 34.125.220.0 | 34.125.0.0/16 | 15169 | GOOGLE
0.220.144.82.colo.static.dcvolia.com | 82.144.220.0 | 82.144.192.0/19 | 25229 | Volia
0.220.124.190-isp.enetworksgy.com | 190.124.220.0 | 190.124.220.0/22 | 52253 | E-Networks Inc.

I want to pivot this table on the ASN value first, then on other values in other reports. Obviously this sample data is only 10 lines long; my production data has several million lines based on the domain names I have specified. Here is the search that yields this information:

index="top_10_com_dns" | lookup ASNs network as value OUTPUT network as network, autonomous_system_number as asn, autonomous_system_organization as asn_org | table name, value, network, asn, asn_org

I am using the MaxMind GeoLite2 ASN data (https://dev.maxmind.com/geoip/geoip2/geolite2/) as a lookup table and checking the IP address from the Rapid7 DNS data against the ASN ranges to establish which ASN it is part of. I can provide samples with formatting of that data if required. When I put the above data into a pivot, it comes back with "0 of 0 events before CURRENT DATETIME". I hope this sample data makes my problem more clear. Thanks again!
I wanted to ask if anyone knows what this Account_Name "-" is. I am seeing it in the attempted logins for disabled accounts, but I am not sure what it is for.
Many of the forwarders here go down when the servers go down for maintenance work. What can go wrong with the forwarders when we don't shut them down cleanly?
I need to create an alert that's more intelligent and based on a baseline. I have a search that produces the following dataset, in run-anywhere SPL:

| makeresults 1
| eval service="placeOrder", week=1, Volume=100, VolumeMed=100, VolumeLowerBound=28.75, VolumeIQR=47.5, VolumeUpperBound=175.25, VolumeOutlier=0, SuccessRate=80, FailureRate=20, RespTimeMed=500
| append [| makeresults 1 | eval service="placeOrder", week=2, Volume=95, VolumeMed=100, VolumeLowerBound=28.75, VolumeIQR=47.5, VolumeUpperBound=175.25, VolumeOutlier=0, SuccessRate=10, FailureRate=90, RespTimeMed=11400]
| append [| makeresults 1 | eval service="placeOrder", week=3, Volume=105, VolumeMed=100, VolumeLowerBound=28.75, VolumeIQR=47.5, VolumeUpperBound=175.25, VolumeOutlier=0, SuccessRate=85, FailureRate=15, RespTimeMed=450]
| append [| makeresults 1 | eval service="placeOrder", week=4, Volume=100, VolumeMed=100, VolumeLowerBound=28.75, VolumeIQR=47.5, VolumeUpperBound=175.25, VolumeOutlier=0, SuccessRate=75, FailureRate=25, RespTimeMed=550]
| append [| makeresults 1 | eval service="placeOrder", week=5, Volume=15, VolumeMed=100, VolumeLowerBound=28.75, VolumeIQR=47.5, VolumeUpperBound=175.25, VolumeOutlier=1, SuccessRate=75, FailureRate=25, RespTimeMed=450]
| fields service, week, Volume, VolumeMed, VolumeLowerBound, VolumeIQR, VolumeUpperBound, VolumeOutlier, SuccessRate, FailureRate, RespTimeMed

There are 5 KPIs in total: Volume, SuccessRate, WarningRate, FailureRate, and ResponseTime. I want to remove outliers prior to calculating the standard deviation, but the problem is that a single row may contain an outlier in only one of these KPIs, so the row must stay. In this case each KPI outlier is on a different row, so I need the StDev of Volume to exclude week 5, and I need the StDev of SuccessRate to exclude week 2, but week 2 must still be included for the Volume StDev because it is not a Volume outlier, only a SuccessRate outlier.
I tried adding this to the end of the search:

| stats stdev(Volume), stdev(case(VolumeOutlier=0, Volume)) as VolumeStdDev by service

But this is the result:

service | stdev(Volume) | VolumeStdDev
placeOrder | 38.1772 | (blank)

The column where I try to limit which rows are used is blank, when VolumeStdDev should be 4.0824. How can I exclude certain outlier rows of one KPI from a StDev calculation while keeping those rows in the StDev of the other KPIs where they are not outliers?
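To pin down the expected numbers, here is the same calculation sketched in Python: the sample standard deviation over all five Volume values versus the one that drops the flagged outlier row, which reproduces the 38.1772 above and the ~4.08 expectation:

```python
from statistics import stdev

rows = [
    {"week": 1, "Volume": 100, "VolumeOutlier": 0},
    {"week": 2, "Volume": 95,  "VolumeOutlier": 0},
    {"week": 3, "Volume": 105, "VolumeOutlier": 0},
    {"week": 4, "Volume": 100, "VolumeOutlier": 0},
    {"week": 5, "Volume": 15,  "VolumeOutlier": 1},
]

# Sample std dev over all rows (outlier week 5 included)
all_sd = stdev(r["Volume"] for r in rows)
# Sample std dev excluding rows flagged as Volume outliers
clean_sd = stdev(r["Volume"] for r in rows if r["VolumeOutlier"] == 0)
print(round(all_sd, 4), round(clean_sd, 4))  # 38.1772 4.0825
```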
Is it possible to use the results of the same search in multiple panels on the same dashboard, with different visualizations for them? (By the "same search" I mean: run it once, present the results in several places via different means.) Reason: make it faster and use fewer resources.

Example: all four panels of the above dashboard use basically the same search, which checks whether a field message was extracted and reports the stats highlighting the number of events where that field is not present.

Notes:
- The field message should be present in all events; if it's not, it's a field extraction error.
- The error is not necessarily the result of a bad field extraction regex; it could also be the result of a malformed event, an event breaking too soon, etc.
- The top right panel is all that is needed, yet the other panels do help and I'd like to keep them there, although not at the expense of running multiple redundant searches.

The search:

sourcetype="some_sourcetype" | eval "Field Extraction Error(s)" = if(isnull(message),"present","not present") | stats sparkline count by "Field Extraction Error(s)"

Possible? Thanks!
Has anyone tried the Splunk Connect for Zoom app on version 7.2 or lower?
I'm working with some JSON data that contains one field with a list of keys and one field with a list of values. These pairs may change from event to event, but item 1 in field 1 will always align with item 1 in field 2. I'd like to join these together so that I get a field named field1_value1 holding the data of field2_value1. A sample of where I am right now:

| makeresults count=1
| eval event.key="email,user,event_id,state"
| eval event.values="user@acme.corp,Jon Smith,1234,Open"
| makemv delim="," event.key
| makemv delim="," event.values
| eval keyjoin=mvzip('event.key','event.values')
| mvexpand keyjoin

This properly joins the data into the field keyjoin, but now I have to split out the first value as the field name and the second as the field value. Any advice?

Edit: The desired end state is the ability to add further search criteria after formatting the data. This is going to drive several panels, so obviously more than that, but if I can get to that stats command, I can go from there. I just need to solve for MISSING SPL HERE:

| makeresults count=1
| eval event.key="email,user,event_id,state"
| eval event.values="user@acme.corp,Jon Smith,1234,Open"
| makemv delim="," event.key
| makemv delim="," event.values
| eval keyjoin=mvzip('event.key','event.values')
| mvexpand keyjoin
| **MISSING SPL HERE**
| stats count by state, user
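Underneath, the positional pairing being asked for is just a zip of the two lists; here is that logic sketched in Python with the sample values from above (this mirrors what mvzip does, then turns each pair into a field name and value):

```python
keys = "email,user,event_id,state".split(",")
values = "user@acme.corp,Jon Smith,1234,Open".split(",")

# Pair item N of the key list with item N of the value list,
# then treat each pair as field name -> field value
record = dict(zip(keys, values))
print(record["state"], record["user"])  # Open Jon Smith
```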
From F5 VPN logs, I can easily determine the VPN duration by using the transaction command. The working query for me uses:

startswith="New Connection on ip: "
endswith="session statistics: bytes IN:"

But how can I detect active VPN sessions during the last two hours, i.e. the users who connected two hours ago and are still connected? One thought is to use an eval in endswith expressing that no such statistics event exists, but how do I write that query? Secondly, I could use the stats command, where I say "New Connection" AND NOT "session statistics", use earliest(_time) as session_start to get the session start time, and then use now() - session_start. Any thoughts?
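The "started but never finished" logic can be checked outside SPL to validate the idea; in the Python sketch below, the events, user names, and times are all hypothetical — a session is active if it has a start event but no matching statistics event:

```python
from datetime import datetime, timedelta

# Toy event stream: (timestamp, user, message) — all values hypothetical
events = [
    (datetime(2020, 4, 24, 9, 0), "alice", "New Connection on ip: 10.0.0.1"),
    (datetime(2020, 4, 24, 9, 30), "alice", "session statistics: bytes IN: 123"),
    (datetime(2020, 4, 24, 8, 0), "bob", "New Connection on ip: 10.0.0.2"),
]

starts, ended = {}, set()
for ts, user, msg in events:
    if msg.startswith("New Connection"):
        starts.setdefault(user, ts)   # keep the earliest start
    elif msg.startswith("session statistics"):
        ended.add(user)

now = datetime(2020, 4, 24, 11, 0)
# Active sessions: started but never closed; duration measured against "now"
active = {u: now - ts for u, ts in starts.items() if u not in ended}
print(active)  # only bob, connected for 3 hours
```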
I am using a Python scripted input that needs to read an encoded username and password from a text file. The script then writes the results from an API call to a JSON file. The pseudocode looks like this:

with open('key.txt', 'r') as file:
    # set username and password
data = API call
with open('json', 'w') as file2:
    # write data

My script throws an error saying that key.txt does not exist, even though it is definitely in the same directory. Are there any permission issues that could be causing this?
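One common culprit (offered as a guess, not a diagnosis) is the working directory: scripted inputs are often run from a directory other than the script's own folder, so a bare relative path like 'key.txt' resolves somewhere else. Anchoring the path to the script's location sidesteps that:

```python
import os

# Resolve key.txt relative to this script's directory, not the
# process's current working directory.
SCRIPT_DIR = os.path.dirname(os.path.abspath(__file__))
KEY_PATH = os.path.join(SCRIPT_DIR, "key.txt")

def read_credentials(path=KEY_PATH):
    """Return the raw contents of the key file."""
    with open(path) as f:
        return f.read().strip()
```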