All Topics

Find Answers
Ask questions. Get answers. Find technical product solutions from passionate members of the Splunk community.


Hello, Can someone tell me the difference between setting up a DB collector to monitor the database vs installing a machine agent extension that can poll the V$ table from the database independently? Are we likely to see the same metrics? Any advantages/disadvantages of these two approaches? Thanks SN
I'm trying to edit the source code of my dashboard to create a panel that only produces results if a text field input (the value of my token $partner$) is NOT "". If $partner$ is empty, I don't want this panel to produce anything. By default, $partner$=* based on how I've configured my other panels. I've consulted several articles and the Splunk documentation and am stumped on what I'm doing wrong. Thank you for helping!

<search>
  <done>
    <condition match="$partner$=*">
      <unset token="partner">t1</unset>
    </condition>
    <condition match="$partner$!=*">
      <set token="partner">t1</set>
    </condition>
  </done>
  <query>[SEARCH QUERY HERE]</query>
  <earliest>-30d@d</earliest>
  <latest>now</latest>
  <sampleRatio>1</sampleRatio>
</search>
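One common pattern for this (a sketch, not a verified fix — the token name show_panel is a placeholder I've introduced): drive a second token from the input's own &lt;change&gt; handler, and make the panel depend on that token instead of testing $partner$ inside &lt;search&gt;&lt;done&gt;:

```xml
<input type="text" token="partner">
  <default>*</default>
  <change>
    <!-- non-empty input: reveal the panel -->
    <condition match="$value$ != &quot;&quot;">
      <set token="show_panel">true</set>
    </condition>
    <!-- empty input: unset the token so the panel is hidden -->
    <condition>
      <unset token="show_panel"></unset>
    </condition>
  </change>
</input>

<panel depends="$show_panel$">
  <!-- your table/chart goes here; it only renders while show_panel is set -->
</panel>
```

A panel with depends="$show_panel$" disappears entirely (and its search doesn't run) whenever the token is unset, which matches the "produce nothing" behavior described.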
I have 2 source types, each having an asset_id field. I want a search that displays the asset_id values that exist in both source types, and from those results I want to display the nexpose_tag field (from the second source type) for each asset_id.
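A sketch of one stats-based approach (index and sourcetype names are placeholders): count distinct sourcetypes per asset_id and keep only the IDs seen in both, carrying nexpose_tag along.

```
index=your_index (sourcetype=st1 OR sourcetype=st2)
| stats dc(sourcetype) AS st_count values(nexpose_tag) AS nexpose_tag BY asset_id
| where st_count=2
```

Because nexpose_tag presumably only exists in the second sourcetype, values() picks it up without needing a join.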
Hi Experts, I am pretty sure this has already been answered, but I am not able to find the correct answer on the community. I have a path as below:

C:\app1\tomcatlogs1\WNSalesLogs1\WNEngine1\
    server1
    server2
    server3

I have 8 servers on which the same directory structure exists. I want to use host_segment so that my host name will be picked up automatically, and I only want to index the server1 files. So there are 2 things I want to achieve:
1) If I am on host 1, the host name should be server1.
2) Only files in the server1 folder will get indexed.

I tried the following, but it is not indexing my files and not setting the hostname:

[monitor://C:\app1\tomcatlogs1\WNSalesLogs1\WNEngine1\*\productengin_*.log]
disabled = false
host_segment = 5
index = main
whitelist = server1

Any suggestion will be highly appreciated.
Regards, VG
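Two things worth checking (a sketch, not a verified fix): host_segment counts the separator-delimited segments of the full path, and on Windows the drive letter can shift the count by one, so server1 may be segment 6 rather than 5 here. Also, whitelist is a regex matched against the full path, so anchoring it between path separators avoids accidental matches:

```ini
[monitor://C:\app1\tomcatlogs1\WNSalesLogs1\WNEngine1\*\productengin_*.log]
disabled = false
# app1=1 ... WNEngine1=4, server1=5 if the drive letter is not counted;
# if host_segment = 5 yields the wrong host, try 6
host_segment = 6
index = main
# regex against the full path; \\server1\\ matches the directory name exactly
whitelist = \\server1\\
```

If the host still comes out wrong, checking what splunkd logs for that monitor stanza (index=_internal component=TailingProcessor) usually shows how the path was segmented.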
We want to be able to see when a Workday user changes their direct deposit account/routing information multiple times. Workday users shouldn't be changing this very often, so if it is happening it could be a security breach that we would want to be aware of. Setting an alert for when someone attempts to do this multiple times over a short period could thwart efforts to reroute someone's payroll.
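A sketch of what the alert search could look like. The index, task filter, and field names (user, task) are assumptions here — the actual Workday sourcetype's field names may differ, so they would need to be checked against your events:

```
index=workday task="*Direct Deposit*"
| bin _time span=1d
| stats count BY user, _time
| where count > 2
```

Saved as an alert, this would fire for any user making more than two direct-deposit changes in a day; the span and threshold are tunable.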
We have been using Lookup File Editor for a long time. All of a sudden yesterday, some customers couldn't access existing lookup files. We thought it was a cache issue. I cleared my cache in Chrome, and now I cannot access any of the lookup files. Under Apps, I select Lookup Editor; all the files (.csv) are listed, but when I click on one of the files, everything disappears and I only see the headers:

Splunk>enterprise  App: Lookup Editor
Lookups
New Lookup

That is all; everything below is blank. We have not upgraded the Lookup File Editor or anything else. The same thing is happening on our Dev Splunk SH as well. I upgraded that one last week, but it is occurring on that SH too. Any idea why the Lookup File Editor wouldn't be working correctly anymore? Any help is appreciated.
Quick background: I'm looking for SSO logins by users that have authenticated via NTLM.

Issue: I copied a snippet of text directly from the SSO logs ("NTLMSSP principal: DomainName= UserName") that I thought would be apparent in each event where users are using SSO but authenticating with NTLM. There are 100 fields total associated with the SSO index alone (i.e., typing in index=sso and hitting enter); however, using that string of text as a search parameter reduces the available fields from 100 to 23. Since I cannot see the entire list of fields, it's causing issues: I need to be able to see the "carrierCode" field, which isn't available when using the "NTLMSSP principal: DomainName= UserName" text string in my search. Is there a way to incorporate another search within my existing search [and subsearch] that would allow all 100 fields to be viewable in the SSO index, so that I could select the carrierCode field and capture its numbers? I hope this makes sense. Any help is greatly appreciated.

index="sso" sourcetype="ping*" UserName="" Workstation="" "NTLMSSP principal: DomainName= UserName"
| fields index, sourcetype, UserName, Workstation
| join UserName
    [search index=msad sourcetype=ActiveDirectory sAMAccountName="*"
    | stats count by title, description, department, sAMAccountName
    | rename sAMAccountName AS UserName
    | table description, department, UserName, title]
| stats count by Workstation, UserName, title, department, description
| sort -count
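One thing to note (a sketch, under the assumption that carrierCode is actually extracted in at least some of the matching events): the fields sidebar only lists fields present in the returned events, but an explicit | fields clause can still request carrierCode even when the sidebar hides it. The `| fields` list in the posted search is also dropping it, so something like:

```
index="sso" sourcetype="ping*" "NTLMSSP principal: DomainName= UserName"
| fields index, sourcetype, UserName, Workstation, carrierCode
| stats count BY carrierCode, UserName
```

If carrierCode comes back empty here, the field genuinely isn't in the NTLM events, and it would have to be pulled from a correlated non-NTLM event instead.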
Hello,

Here is my dashboard search before using the transpose command:

index=oit_printer_monitoring AND type=Printer
| eval timeConvDate=strftime(_time,"%a %m-%d-%Y")
| eval timeConvTime=strftime(_time,"%H:%M:%S")
| table printer, status, timeConvDate, timeConvTime
| dedup printer

Here is my dashboard search after using the transpose command:

index=oit_printer_monitoring AND type=Printer
| eval timeConvDate=strftime(_time,"%a %m-%d-%Y")
| eval timeConvTime=strftime(_time,"%H:%M:%S")
| table printer, status, timeConvDate, timeConvTime
| dedup printer
| transpose 0

Here is my colorPalette for both:

<format type="color" field="status">
  <colorPalette type="map">{"toner low":#EC9960,"normal":#4FA484}</colorPalette>
</format>

1) How do I make the cells that were colored pre-transpose stay colored post-transpose?
2) In the post-transpose dashboard, how would I color an entire column? Printer oix53 has status=normal; therefore oix53 should have a green background, as should its Tue 03-03-2020 and 15:28:31 cells. Whereas printer oix58 has status=toner low; therefore oix58, Tue 03-03-2020 and 15:28:31 should have an orange background.

UPDATE: Another issue with using transpose. It appears that the field names either no longer exist or are renamed (columns to rows?), and I cannot find the correct ones. I think figuring out the new(?) field names will resolve the above issue as well as the new issue below. Meanwhile, here is the new issue: the drilldown code below works pre-transpose, but it does not work post-transpose. Why?

<drilldown>
  <link target="_blank">/app/search/printertest2?form.printer=$row.printer$</link>
</drilldown>

Thanks and God bless,
Genesius
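On the field-name question: by default, transpose names the new columns "column", "row 1", "row 2", and so on, which is why color formats and drilldown tokens keyed to the old field names (status, printer) stop matching. A sketch using transpose's header_field option so the transposed columns are named after the printer values instead (the column_name value "field" is just a placeholder label):

```
index=oit_printer_monitoring AND type=Printer
| eval timeConvDate=strftime(_time,"%a %m-%d-%Y")
| eval timeConvTime=strftime(_time,"%H:%M:%S")
| table printer, status, timeConvDate, timeConvTime
| dedup printer
| transpose 0 header_field=printer column_name=field
```

With that, each printer becomes a column name (oix53, oix58, ...), so a &lt;format type="color" field="oix53"&gt; stanza and a drilldown token like $row.field$ have stable names to bind to.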
All,

A member of our management team is concerned about a Splunk Forwarder's number of processes and threads. Curious what's normal? What might create more threads? Fewer? Most of my servers have anywhere from 40-49.

# ps auwxH | grep splunk | wc -l
43

ps auwwxH | grep -i splunk
Tue Mar 3 11:24:51 PST 2020
root 4980 0.0 0.2 362396 166344 ? Sl Feb27 0:24 splunkd -p 8089 restart
root 4980 0.0 0.2 362396 166344 ? Sl Feb27 0:00 splunkd -p 8089 restart
[...........]
root 4980 0.0 0.2 362396 166344 ? Sl Feb27 0:12 splunkd -p 8089 restart
root 4980 0.0 0.2 362396 166344 ? Sl Feb27 4:59 splunkd -p 8089 restart
root 4980 0.0 0.2 362396 166344 ? Sl Feb27 0:00 splunkd -p 8089 restart
root 4986 0.0 0.0 86200 1056 ? Ss Feb27 0:00 [splunkd pid=4980] splunkd -p 8089 restart [process-runner]
root 7619 0.0 0.0 103328 916 pts/32 S+ 11:24 0:00 grep -i splunk

My knee-jerk guess is that maybe we have a thread per monitored file, another one reaching back to the deployment server, and maybe another per scripted input from Splunk_TA_nix?
Hi Splunk Chaps,

We are having issues with a data source where events are duplicated in the cluster. Strangely, there are duplicate as well as non-duplicate events for a few source types. I also looked at the index time for the duplicated events, and it is exactly the same for both copies, so I'm not sure whether to point at the forwarder configuration. Any help would be appreciated. How can I identify whether the duplicates are coming from the forwarder configuration or from the indexers?

Thanks,
Pramodh
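A sketch for narrowing down where the copies come from (index name is a placeholder): group identical raw events and compare which indexer stored each copy. splunk_server is the indexer that holds the event, so duplicates landing on different indexers point toward forwarder resends, while duplicates on the same indexer point toward the input configuration.

```
index=your_index
| eval idx_time=_indextime
| stats count values(splunk_server) AS indexers values(idx_time) AS index_times BY _time, _raw
| where count > 1
```

Identical _indextime on both copies, as described, is consistent with a forwarder sending the same block twice in one batch (e.g. after a connection reset with useACK behavior), but the indexers column should make it clearer.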
I want to order the ranges from a low amount of min/hour to high, like this: 1S-1M, 1M-30M, 30M-1H, 1H-2H, 2H-3H, 3H-4H, 4H-5H, 5H-8H, 8H-10H, 10H-15H, 15H-More. I use this command:

| rangemap field=duration 1S-1M=0-60 1M-30M=61-1800 30M-1H=1801-3600 1H-2H=3601-7200 2H-3H=7201-10800 3H-4H=10801-14400 4H-5H=14401-18000 5H-8H=18001-28800 8H-10H=28801-36000 10H-15H=36001-54000 15H-More=54001-9999999999
| chart count by range

but the ranges come out in the wrong order, and I want them ordered from 1S-1M, 1M-30M, 30M-1H, ..., 10H-15H, 15H-More.
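Since the range labels sort lexicographically, one common workaround (a sketch) is to compute a numeric sort key for each label after the chart, sort on it, then drop it:

```
| rangemap field=duration 1S-1M=0-60 1M-30M=61-1800 30M-1H=1801-3600 1H-2H=3601-7200 2H-3H=7201-10800 3H-4H=10801-14400 4H-5H=14401-18000 5H-8H=18001-28800 8H-10H=28801-36000 10H-15H=36001-54000 15H-More=54001-9999999999
| chart count by range
| eval order=case(range=="1S-1M",1, range=="1M-30M",2, range=="30M-1H",3, range=="1H-2H",4, range=="2H-3H",5, range=="3H-4H",6, range=="4H-5H",7, range=="5H-8H",8, range=="8H-10H",9, range=="10H-15H",10, range=="15H-More",11)
| sort order
| fields - order
```

An alternative is renaming the labels with numeric prefixes ("01 1S-1M", "02 1M-30M", ...) so the default sort is already correct.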
Is it possible to send the message part of Splunk HEC events to a 3rd-party collector/ArcSight? E.g., right now it is: Logstash --- Splunk HEC / HF --- Indexer. I want to parse the message field at the HEC/heavy forwarder and send it to an ArcSight collector before it is sent to the indexers. Is this possible? Kindly help.
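Splunk heavy forwarders can route data to a third-party destination over syslog. A sketch of the three pieces (stanza names, host, and sourcetype are placeholders; the routing key _SYSLOG_ROUTING is what selects the syslog output group):

```ini
# outputs.conf on the heavy forwarder
[syslog:arcsight]
server = arcsight.example.com:514

# props.conf -- attach the routing transform to the HEC sourcetype
[my_hec_sourcetype]
TRANSFORMS-route_arcsight = send_to_arcsight

# transforms.conf
[send_to_arcsight]
REGEX = .
DEST_KEY = _SYSLOG_ROUTING
FORMAT = arcsight
```

This sends the whole event to ArcSight in addition to the indexers. Stripping the event down to just the message field before it leaves would need an additional index-time rewrite (e.g. a SEDCMD or a transform targeting _raw), which is worth prototyping carefully since it also changes what the indexers receive unless you clone the stream first.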
Hello Splunkers,

I have two fields that correlate. One field is hostname and the other is score. When I try to get an average of the score, I get an incorrect value because the score field is included in the calculation even when the hostname is null and not representing anything. Is there a way to use if(isnull()) or another eval function so that if hostname is null, the score field gets the value 0?

Thanks,
Cooper
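A minimal sketch of the eval described:

```
| eval score=if(isnull(hostname), 0, score)
| stats avg(score) AS avg_score
```

Note that zeroing the score still counts those rows in the average's denominator; if the goal is to exclude them entirely, `| where isnotnull(hostname)` before the stats may be closer to what's wanted.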
The disk usage is at 17% and inode usage is at 1%. The error message from Splunk Web says minFreeSpace is 5000 and free space is 85711:

03-03-2020 16:16:48.266 +0000 WARN DiskMon - MinFreeSpace=5000. The diskspace remaining=85711 is less than 2 x minFreeSpace
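For reference, a sketch of where the threshold lives if it needs adjusting (values are in MB; check which partition DiskMon is actually measuring, since the warning may refer to a different volume than the one at 17%):

```ini
# server.conf
[diskUsage]
minFreeSpace = 5000
```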
Hello all,

I am indexing database data into Splunk, forwarding the data from heavy forwarders to indexers. I have host, source, sourcetype and index defined in DB Connect. Now when I try creating calculated fields on this data on the Splunk ES search head, I don't see them working at all. Event types and tags work, but not calculated fields, even though the same eval expression works fine when I run it as a query. I have defined the calculated field based on the source name; I even created a dummy sourcetype with that name, but nothing seems to work. Requesting help in sorting this out.
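One thing to double-check (a sketch with placeholder names): calculated fields keyed on a source need the source:: prefix in their props.conf stanza, which the UI's "source" dropdown handles but hand-edited configs often miss, and the app holding the definition must be shared so ES users can see it:

```ini
# props.conf on the search head
[source::my_db_input]
EVAL-my_calc_field = upper(some_field)
```

`| rest /services/data/props/calcfields` (or Settings > Fields > Calculated fields, with app context set to All) can confirm whether the definition is visible from the ES app at all.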
I have the following set of data within each event:

stack_trace: [
  {
    class_name: FOO
    file_name: BAR
    line_number: -2
    method_name: WALK
  }
  {
    class_name: FOO2
    file_name: BAR2
    line_number: 1356
    method_name: JUMP
  }
  {
    class_name: FOO
    file_name: BAR
    line_number: 808
    method_name: SKIP
  }
]

I want to extract only the first method_name within the stack (| spath "stack_trace{}.method_name" | search "stack_trace{}.method_name"=WALK), which can change from event to event. I've tried using mvindex but I'm having no success. Any suggestions would be greatly appreciated.
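A sketch of the spath + mvindex combination for this shape of data:

```
| spath path=stack_trace{}.method_name output=methods
| eval first_method=mvindex(methods, 0)
```

spath flattens all method_name values into one multivalue field in document order, so mvindex(..., 0) picks the top frame of the stack regardless of what its value happens to be in a given event.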
Hello,

I am fairly new to Splunk and was wondering if the eval case function can be used in conjunction with lookup tables. Here is my current problem (if there are other solutions I am open to suggestions). I have 2 message types (100 and 200), each having a separate set of debug codes associated with it, so I am using lookup tables to expand the fields based on the message type and its corresponding field definitions. As far as I can tell, I cannot do this:

| lookup msg100_debug_codes.csv Code100 as DebugCode
| lookup msg200_debug_codes.csv Code200 as DebugCode

The second lookup table overrides what is in the first table. Also, the lists of debug codes for the two message types have overlapping numbers, which is why I cannot use one master lookup table: there could be 2 of the same (key, value) pairs. This is what led me to the case statement. Can I use case to direct which lookup table to use? I am not sure if this is possible. Thank you in advance, and if I can clarify any details please let me know.
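lookup can't be invoked conditionally per event, but a common pattern is to run both lookups into differently named output fields and then use case (or coalesce) to pick the right one. A sketch, assuming a msg_type field and a Description output column in each CSV (both names are placeholders for whatever the actual fields are called):

```
| lookup msg100_debug_codes.csv Code100 AS DebugCode OUTPUT Description AS desc100
| lookup msg200_debug_codes.csv Code200 AS DebugCode OUTPUT Description AS desc200
| eval Description=case(msg_type==100, desc100, msg_type==200, desc200)
```

Because each lookup writes to its own field, neither overrides the other, and the overlapping code numbers stop being a problem.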
I have Cisco ESA logs coming into Splunk and extractions are working as they are meant to. The logs are sent by syslog, and each line of the event appears as an entry in the index. To combine a transaction in ESA I use the following SPL query (mid is the common field):

index=foo | transaction mid

This gives me the required block for a particular mid (message ID) that has come through the ESA device. I can do searches from here. For example:

index=foo | transaction mid | search spam_status=positive

For a 15-minute search, this is quite quick. However, for a 24-hour block (particularly if I am trying to do some reporting on, say, the number of spam-negative and spam-positive messages) the search takes a very, very long time. In fact, if I stop the search I end up with 0 results. For example:

index=foo | transaction mid | stats count by spam_status

Is there a way I can do this transaction without waiting an eternity for the search to finish (if it ever does)? If I shortcut the search with:

index=foo mid=123456 | transaction mid | stats count by spam_status

then the search is fast. However, this depends on knowing the mid, which changes every day, so it is not feasible, especially for reports over all the available logs.
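Since transaction holds events in memory and is one of the slowest ways to group by a key, a stats-based rollup by mid is usually far faster over long ranges. A sketch for the spam_status report:

```
index=foo
| stats values(spam_status) AS spam_status BY mid
| stats count BY spam_status
```

The first stats collapses each message's lines into one row keyed by mid (like the transaction, but streamable and distributable across indexers); the second counts messages per status. If other fields from the transaction are needed, adding more values()/first() aggregations to the first stats covers them.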
I'm struggling a lot to learn dashboard/custom visualization development using JavaScript and CSS. Could anyone please share materials or links?
In my indexer cluster, on the MC under "Indexing > Performance > Indexing Performance: Deployment", I'm noticing that about half of my indexers show close to 100% across queues (from parsing to indexing). My question is: why isn't the data being load balanced from the UFs? If some indexers are full, why is data not being sent to the indexers that have low volume in their queues? I keep getting warnings that forwarding destinations have failed, as if the forwarders are only trying to send to the full indexers. My outputs.conf accounts for all indexers in the cluster, so there must be something else I'm overlooking.