All Topics

Hi Splunkers, I am trying to get a new internal field "_application" added to certain events, so I added the field via _meta in inputs.conf on the forwarder:

[script:///opt/splunkforwarder/etc/apps/Splunk_TA_nix/bin/df_metric.sh]
sourcetype = df_metric
source = df
interval = 300
disabled = 0
index = server_nixeventlog
_meta = _application::<application_name>

I also added a new stanza to fields.conf:

[_application]
INDEXED = false
# Set to "true" if the field is created at index time.
# Set to "false" for fields extracted at search time. This accounts for the majority of fields.
INDEXED_VALUE = false
# Set to "true" if the value is in the raw text of the event.
# Set to "false" if the value is not in the raw text of the event.

The fields.conf is deployed to the indexers and the search head, but I still do not see the field. I tried searching for "_application::<application_name>", "_application=<application_name>", _application::*, and _application=*. Nothing. Can somebody explain to me where the problem is?
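Two things in the documented pattern differ from the setup above: a field added via _meta is an index-time field, so fields.conf should set INDEXED = true, and field names beginning with an underscore are treated as internal by Splunk, which can keep them out of normal search results. A minimal sketch using a hypothetical non-underscore field name, app_name:

# inputs.conf on the forwarder (same stanza as above)
_meta = app_name::myapp

# fields.conf on the search head
[app_name]
INDEXED = true

# search using the :: syntax for indexed fields
app_name::myapp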
Hello all, we are running Splunk 8.2 and would like to set tsidxWritingLevel to 4. We have a multisite cluster and want to deploy this to all the indexers. Should I make the change on the cluster master (master-apps) and push the bundle, or do I need to log in to each individual indexer, change the parameter, and restart it?
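For an indexer cluster the documented path is to push indexes.conf from the cluster master's master-apps rather than editing peers one by one; the apply step handles any required rolling restart. A minimal sketch, assuming the setting should apply to all indexes:

# on the cluster master, e.g. $SPLUNK_HOME/etc/master-apps/_cluster/local/indexes.conf
[default]
tsidxWritingLevel = 4

# then push the bundle to the peers:
splunk apply cluster-bundle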
Here is a reduced version of my JSON:

{
   records: [
     {
       errors: 4
       name: name1
       plugin: p1
       type: type1
     }
     {
       errors: 7
       name: name2
       plugin: p1
       type: type2
     }
     {
       errors: 0
       name: name3
       plugin: p2
       type: type3
     }
   ]
   session: {
     document: my_doc
     user: me
     version: 7.1
   }
}

There are 3 records in records{}, so I expect to get 3 events using mvexpand, but I get 6. I'm using a query similar to one I found in an answer in this community:

| spath
| rename records{}.name AS name, records{}.type AS type, records{}.plugin as plugin, records{}.errors as errors
| eval x=mvzip(mvzip(mvzip(name,type),plugin),errors)
| mvexpand x
| eval x=split(x,",")
| eval name=mvindex(x,0)
| eval type=mvindex(x,1)
| eval plugin=mvindex(x,2)
| eval errors=mvindex(x,3)
| table name, type, plugin, errors

I get 6 rows instead of 3:

name   type   plugin  errors
name1  type1  p1      4
name2  type2  p1      7
name3  type3  p2      0
name1  type1  p1      4
name2  type2  p1      7
name3  type3  p2      0

Any suggestion how to fix the query to avoid the duplication? Thanks!
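Doubled rows like this usually mean each multivalue field already holds every value twice, for example when automatic KV extraction and spath both populate records{}.name. A minimal sketch of one way to guard against that, assuming the duplication is present before the expand:

| eval x=mvdedup(mvzip(mvzip(mvzip(name,type),plugin),errors))
| mvexpand x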
I'm running Splunk Enterprise 8.2.5 on Windows 2019: two indexers in a cluster, a single search head, and a separate cluster master/license master/deployment server, all on Windows 2019. I have installed IT Essentials Work version 4.31.1, created the clustered indexes, and enabled the apps I wish to use. After a few minutes the web interface on my single search head grinds to a halt and everything starts running very slowly. Compute on the search head and indexers seems fine, and I have 32 cores and 64 GB RAM on each. If I disable all the apps that come with the IT Essentials Work package, performance returns to normal. Any ideas on where to look to troubleshoot this?
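If it helps as a starting point: IT Essentials Work ships many scheduled searches, so one thing worth checking is scheduler pressure on the search head. A sketch against the internal logs, assuming they are reachable from the search head:

index=_internal sourcetype=scheduler
| stats count by app, status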
Hello, we have a dbinput that pulls in data from an Oracle database. Yesterday there were some problems with our indexer, so we lost a bit of data during that time. I know that I can change the rising checkpoint value back to yesterday, but that will reindex all data from that point to now. Is there any other way to reindex just the missing data from the past without deleting everything? Edit: I tried the delete-and-reindex-all approach by changing tail_rising_column_checkpoint_value to an epoch time in the past, but when the input starts it only starts indexing from the point at which I refreshed. Please help.
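One way to backfill a bounded window without moving the input's checkpoint is an ad-hoc DB Connect search piped into collect. A sketch with hypothetical connection, table, column, and index names; the exact dbxquery syntax depends on your DB Connect version:

| dbxquery connection=my_oracle_conn query="SELECT * FROM my_table WHERE event_ts BETWEEN TO_DATE('2022-07-12','YYYY-MM-DD') AND TO_DATE('2022-07-13','YYYY-MM-DD')"
| collect index=my_db_index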
Hi Splunkers, this might be a dumb question, but I am a bit confused about ITSI licensing. I understand that ITSI requires an ingest of at least 50GB as well as a separate ITSI license in addition to the Splunk Core license. Does that mean a 50GB Core license and a 0GB ITSI license? Or could you get a 0GB Core license (like you would use on a HF that doesn't ingest anything) and a 50GB ITSI license? And can both be installed in the same place in the GUI of the license master, or is there a special process for the ITSI license?
Hello, I have a lookup with two columns, one with the computer name and the other with the OS version. When I search the windows index in Splunk (event logs), I want to use this lookup to add the OS version to the results. In other words, I want to display the information from my lookup as a field in my index search results. Greetings
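The lookup command does this at search time. A minimal sketch, assuming a lookup named os_versions.csv with columns computer_name and os_version, and that the computer name corresponds to the host field in the events:

index=windows
| lookup os_versions.csv computer_name AS host OUTPUT os_version
| table _time host os_version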
Hi all, I have this simple search that shows logins from the same source IP to multiple destination hosts. Can someone please explain why dc(dest_ip) does not match the number of values reported by values(dest) in the results? You will notice in the results that when values(dest) shows 2 hostnames, dc(dest_ip) shows 4. Shouldn't it be that if dc(dest_ip) shows 4, then values(dest) should also report 4 unique host names? What am I missing? Thanks

index=xxx source="WinEventLog:Security" EventCode=5140
| stats dc(dest_ip) as dest_count values(dest) values(Account_Name) values(user_first) values(user_last) by Source_Address
| rename values(*) as *

Results:

Source_Address  dest_count  dest                                     Account_name  user_first  user_last
10.x.x.11       4           server01@domain.com server02@domain.com  xxxx          xxx         xxx
10.x.x.12       4           server01@domain.com server02@domain.com  xxxx          xx          xx
10.x.x.13       2           server03@domain.com                      xxx           xx          xx
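One thing to check: dest_count is computed from dest_ip while the listed hostnames come from dest, and the two fields can legitimately disagree (one host resolving to several IPs, or events where only one of the two fields is populated). A sketch that counts both fields side by side, to make any discrepancy visible:

index=xxx source="WinEventLog:Security" EventCode=5140
| stats dc(dest_ip) as ip_count dc(dest) as host_count values(dest) as dest by Source_Address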
Hi, below is one of my requirements. I have multiple lookup tables, for example:

number  name  lookuptable
1       abc   1stlookuptable

number  name  lookuptable
1       abc   2ndlookuptable

number  name  lookuptable
1       dxc   3rdlookuptable

number  name  lookuptable
1       xyz   4thlookuptable

number  name  lookuptable
1       abc   5thlookuptable

The requirement is how to build a query, where name=abc (from the example above), that shows at run time which lookup tables the name belongs to, as a table with the fields name and lookuptable. Example output:

name  lookuptable
abc   1stlookuptable
      2ndlookuptable
      5thlookuptable
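One way is to read each table with inputlookup, tag each row with its table name, and filter on the name. A sketch, assuming the five table names shown above are the actual lookup definitions:

| inputlookup 1stlookuptable | eval lookuptable="1stlookuptable"
| append [| inputlookup 2ndlookuptable | eval lookuptable="2ndlookuptable"]
| append [| inputlookup 3rdlookuptable | eval lookuptable="3rdlookuptable"]
| append [| inputlookup 4thlookuptable | eval lookuptable="4thlookuptable"]
| append [| inputlookup 5thlookuptable | eval lookuptable="5thlookuptable"]
| search name="abc"
| stats values(lookuptable) as lookuptable by name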
Let's assume I have a Linux machine with a universal forwarder installed on it. Can I improve the forwarder's performance by changing some parameters in the OS kernel?
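For what it's worth, OS-level tuning for Splunk processes is usually about user limits rather than kernel parameters proper. A sketch of the kind of settings Splunk documents for its Enterprise instances; treat the exact values as an assumption to verify for a universal forwarder, which is much lighter:

# /etc/security/limits.conf, assuming the forwarder runs as user "splunk"
splunk soft nofile 64000
splunk hard nofile 64000
splunk soft nproc 16000
splunk hard nproc 16000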
Hello Splunkers, on many of our sites we are experiencing this buckets error. Does anyone have the same issue, and how can we solve it? I would really appreciate any help.

Buckets Root Cause(s): The percentage of small buckets (100%) created over the last hour is high and exceeded the red thresholds (50%) for index=_internal, and possibly more indexes, on this indexer. At the time this alert fired, total buckets created=4, small buckets=4.

Unhealthy instances: idx3, idx4
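Small buckets are commonly caused by badly parsed timestamps (events far apart in time force buckets to roll early) or by frequent indexer restarts. A sketch for inspecting recent buckets with the built-in dbinspect command:

| dbinspect index=_internal
| eval spanHours=round((endEpoch-startEpoch)/3600,1)
| table bucketId state eventCount sizeOnDiskMB spanHours
| sort - spanHours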
Background story: we have some customers using a site-to-site VPN to reach our corporate networks. Each customer has 3-4 network prefixes in their environment. I want to check network traffic counters to see whether the customer networks are sending or receiving any traffic to or from my corporate network. Please share some suggested searches; I'm looking for ANY type of network traffic. For example:

customer network A 192.168.1.0/24
customer network B 192.168.2.0/24
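A starting point, assuming firewall or flow data is indexed with src_ip/dest_ip and byte-count fields (adjust the index and field names to your environment):

index=firewall earliest=-24h
| where cidrmatch("192.168.1.0/24", src_ip) OR cidrmatch("192.168.1.0/24", dest_ip)
    OR cidrmatch("192.168.2.0/24", src_ip) OR cidrmatch("192.168.2.0/24", dest_ip)
| stats count sum(bytes_in) as bytes_in sum(bytes_out) as bytes_out by src_ip, dest_ip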
Our Splunk data retention period is 7 days, but I can still see data from 2 years back. I am not sure why. Can anyone help with this?
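For context: retention is enforced per bucket, and a bucket is only frozen once its newest event is older than frozenTimePeriodInSecs, so a bucket spanning a wide time range can keep old events visible well past the nominal retention. A 7-day retention looks like this in indexes.conf (hypothetical index name); it is worth confirming the setting is applied to the index that actually holds the old data:

[your_index]
frozenTimePeriodInSecs = 604800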
I have two queries against the same index and app name, using different search terms, from which I extract a set of fields as below:

Query 1:

index=A cf_app_name=B "search string 1"
| rex field=_raw "(?ms)Id: (?P<Id>[^,]+), service: (?P<service>[^,]+), serial: (?P<serial>[^,]+), Type: (?P<Type>[a-zA-Z-]+)"
| table serial Id Type service _time

Query 2:

index=A cf_app_name=B "search string 2"
| rex field=_raw "(?ms)serial\\W+(?P<serial>[^\\\\]+)\\W+\\w+\\W+(?P<Type>[^\\\\]+)\\W+\\w+\\W+\\w+\\W+(?P<Id>[a-zA-Z]+-\\d+-\\d+)\\W+\\w+\\W+(?P<gtw>[^\\\\]+)\\W+\\w+\\W+(?P<service>[^\\\\]+)"
| table serial Type Id service _time

My requirement is to list all the values from Query 1 and then show a Y/N flag indicating whether there is a match in Query 2 based on the field 'Id'. I have tried join and append but do not seem to be getting the right results; any suggestions will be appreciated.
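One common pattern is a left join on Id against the second search, then fillnull for the flag. A sketch; the Query 2 rex is reduced here to just the Id capture, on the assumption that only Id is needed from that side, and join's subsearch limits apply if Query 2 returns many rows:

index=A cf_app_name=B "search string 1"
| rex field=_raw "(?ms)Id: (?P<Id>[^,]+), service: (?P<service>[^,]+), serial: (?P<serial>[^,]+), Type: (?P<Type>[a-zA-Z-]+)"
| join type=left Id
    [ search index=A cf_app_name=B "search string 2"
      | rex field=_raw "(?P<Id>[a-zA-Z]+-\d+-\d+)"
      | eval in_query2="Y"
      | fields Id in_query2 ]
| fillnull value="N" in_query2
| table serial Id Type service in_query2 _time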
Hello, I'm new to working with Splunk, and I want to create reports and email notifications for when any systems go down. Can any of you help me with a search string for that? Thank you! Thelma
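A common starting point is an alert on hosts that have stopped reporting. A minimal sketch using the metadata command, with a hypothetical 15-minute threshold; saved as an alert, it can trigger an email action whenever results are returned:

| metadata type=hosts index=*
| eval minutes_quiet=round((now()-recentTime)/60)
| where minutes_quiet > 15
| table host minutes_quiet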
Hello, I have the following log (format: month date time, IP address, host, [system]):

2022 194 16:15:14 X01: Freq error: phase start: -13.5 ns, phase end: +4.7 ns

I'm trying to create custom fields named "Start" and "End" that hold only the positive and negative numerical values, but I am fairly new to field extraction and can't seem to find a way to tie the values to "phase start" and "phase end" without having those labels included in the field.
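Anchoring the captures on the literal labels keeps the labels out of the extracted values. A sketch matching the sample line above (assumes the labels and the "ns" units always appear as shown):

| rex "phase start: (?<Start>[-+]?\d+\.?\d*) ns, phase end: (?<End>[-+]?\d+\.?\d*) ns"
| table _time Start End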
Can we do event sampling on the forwarder, before the events reach the indexer, to reduce the event volume?
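There is no general random-sampling control at ingest, but a heavy forwarder (not a universal forwarder, which does not parse events) can discard a subset of events by routing them to nullQueue. A sketch of the mechanism for a hypothetical sourcetype, dropping DEBUG events; a sampling-style rule would need a REGEX that matches the fraction of events you want to drop:

# props.conf on the heavy forwarder
[my_sourcetype]
TRANSFORMS-drop = drop_debug

# transforms.conf
[drop_debug]
REGEX = DEBUG
DEST_KEY = queue
FORMAT = nullQueue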
Hello, I have a calculated field called "rule" in "Network_Traffic.All_Traffic". The data model is accelerated, therefore the eval expression is not editable from the Web UI and I cannot see the expression used to extract/calculate the field. I tried searching all the *.conf files but cannot find it; I was expecting to find it in a props.conf. I know the workaround is to temporarily disable the acceleration so that the calculated field becomes editable and I can see how it is calculated, but I would like to avoid doing that. Is there any other way to do this, OR do you know where data model calculated fields are saved? Thanks a lot, Edoardo
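For what it's worth, data model definitions, including the eval expressions behind calculated fields, are stored as JSON under the owning app, typically $SPLUNK_HOME/etc/apps/<app>/default/data/models/<Model>.json (or local/ for UI edits), not in props.conf. They can also be read over REST without touching the acceleration; a sketch:

| rest /servicesNS/-/-/datamodel/model splunk_server=local
| search title="Network_Traffic"
| table title eai:data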
Hi team, I have a field like below:

Cost:
0.4565534553453
0.0000435463466
0.0021345667788
0.0000000005657

I want to keep only the values of this Cost field that are non-zero within the first 4 decimal places, i.e. only 0.4565534553453 and 0.0021345667788. How can I achieve this in my Splunk query? Please can anyone help me. Regards, NVP
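If the intent is "keep values that are non-zero when rounded to 4 decimal places", a sketch:

| where round(Cost, 4) != 0

or, if Cost is a multivalue field within a single event:

| eval Cost=mvfilter(round(Cost, 4) != 0)

Applied to the sample values, either form keeps 0.4565534553453 and 0.0021345667788 and drops the other two.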
Hi Splunkers, I have struggled badly trying to get this solved, but no luck. I need to join to a different search, using the IP address, to get the host name. The search for the join side is: index=X sourcetype=server dv_ir=4311.00. In that data, the dv_name field is the host name and dv_ip_address is the IP address. Any help will be appreciated. Thank you all!
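A minimal sketch, assuming the main search already carries an ip_address field to match on (join's subsearch limits apply if the server data is large; loading the dv_* data into a lookup would scale better):

<your main search with ip_address>
| join type=left ip_address
    [ search index=X sourcetype=server dv_ir=4311.00
      | rename dv_ip_address AS ip_address
      | fields ip_address dv_name ]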