All Topics

Hello, I want to use the eval search command, but I have a small problem.

index=* host="*" source="/applis" legs{}.status=* | eval error = if(legs{}.status == 200, "OK", "Problem") | chart count by error

When I use the legs{}.status field it doesn't work; I think it's because of the quoting. Does anyone know how to use that field? Thank you, and sorry for my bad English.
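A likely cause, sketched below: in eval, field names containing special characters such as {} and . must be wrapped in single quotes, otherwise the expression does not parse as a field reference. This assumes legs{}.status is already extracted (e.g. from JSON):

```
index=* host="*" source="/applis" "legs{}.status"=*
| eval error = if('legs{}.status' == 200, "OK", "Problem")
| chart count by error
```

Note that if legs is a JSON array, 'legs{}.status' is multivalue, and you may want spath or mvexpand first so each status is evaluated individually.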
Hi Community, I'm trying to optimize an existing query to return values only if a condition is met. The existing query:

source="/var/log/wireless.log" AnyConnect OR NetworkDeviceName=fw* "NOTICE Passed-Authentication: Authentication succeeded" | stats values(Calling_Station_ID) as Public_IP by UserName | where mvcount(Public_IP) > 1

The output looks something like this:

UserName                 Public_IP
test6849@domain.com      127.229.3.176, 127.89.234.34
Example678               127.122.158.253, 127.122.181.170
example5645@domain.com   127.96.171.82, 127.13.146.208
Example123               127.114.242.14, 127.114.243.135, 127.114.252.31
test123@domain.com       127.157.205.179, 127.157.211.18
Example586               127.94.41.110, 127.114.213.249

What I'm trying to achieve is to return a user only IF their Public_IP subnets differ. As an example, I want this returned:

test6849@domain.com      xx.229.3.176, xy.89.234.34

and not these values, where the first two octets are the same:

Example678               xyz.122.158.253, xyz.122.181.170
Example123               127.114.242.14, 127.114.243.135, 127.114.252.31

I managed to identify the first two octets with regex, but I'm unable to get my query to return values:

source="/var/log/wireless.log" AnyConnect OR NetworkDeviceName=fw* "NOTICE Passed-Authentication: Authentication succeeded" | stats values(Calling_Station_ID) as Public_IP by UserName | stats values(Public_IP_octet) as Subnet_count by UserName | where (mvcount(Public_IP) > 1 AND mvcount(Subnet_count) < 2)

Any help would be appreciated.
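One way to sketch this, assuming "subnet" means the first two octets and a Splunk version with mvmap (8.0+): derive the distinct prefixes from the multivalue Public_IP in the same event, then keep users with more than one:

```
source="/var/log/wireless.log" AnyConnect OR NetworkDeviceName=fw* "NOTICE Passed-Authentication: Authentication succeeded"
| stats values(Calling_Station_ID) as Public_IP by UserName
| eval Subnets = mvdedup(mvmap(Public_IP, replace(Public_IP, "^(\d+\.\d+)\..*", "\1")))
| where mvcount(Public_IP) > 1 AND mvcount(Subnets) > 1
```

The second stats in the original attempt discards Public_IP and references Public_IP_octet before it is created, which is why it returns nothing; deriving the prefixes after the stats avoids that.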
Hi everyone. One of our SHC members will be moved to another datacenter, so its assigned IP will be different. It is a 3-member SHC with 1 Deployer, without indexer clustering or a Master Node, and the SHC is currently configured with IP addresses. I would like to know the implications and complexity of changing the IP of one of the SHC members. Is it possible to do this just by reinitializing the SHC member? Are there any official or unofficial Splunk guides for doing this? Thanks.
Hello Splunkers, we need to fetch events from NetScaler devices. After investigating, I found that NetScaler can be configured to send events in syslog format. I had read somewhere that, in order to fetch such events, it is better to use a syslog server and install a UF on it to monitor and forward the files the syslog server writes. However, I saw there is an add-on named "Splunk Add-on for Citrix NetScaler". If we use this add-on on our HF or indexer and search head, and send events directly from the NetScaler device to the indexer/HF, will it work effectively and reliably? Or do we need a syslog server in any case (with a UF installed on top of it)? And if we send events directly from NetScaler to the indexer, I guess we may not need a UF anywhere (does the add-on help us here in any way)? What should the approach be here? In the documentation I saw we can install the add-on on a UF as well, but what is that for? PS: I'm new to NetScaler and quite a beginner with Splunk as well.
Hello everybody. Using Splunk 8.1.0, and following https://docs.splunk.com/Documentation/Splunk/8.1.0/Search/Parsingsearches, I am trying to add comments via ```my_comment``` to a search, but I get an error. Only `comment("my_comment")` works, and that is not what I need... How do I add a comment to a search, like /* my_comment */ or // my_comment ?
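For reference, the inline syntax described on the linked page wraps the comment in three plain backticks inside the SPL itself; a minimal sketch:

````
index=_internal ```count recent internal events``` | head 5
````

If this errors, one thing worth checking is that the backticks are plain ASCII backticks rather than smart quotes pasted from another editor, and that the search is being run in the search bar (not inside a context that strips the backticks, such as some dashboard XML without CDATA).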
Hi, for testing purposes I installed Splunk Enterprise and the Splunk Universal Forwarder on the same machine (Windows, 64-bit). I also configured Splunk Enterprise to receive on port 9997, and configured the Universal Forwarder using the local IP. Splunk Enterprise does not show my machine's logs or source. In my Universal Forwarder, under etc > system > local, there is no "inputs.conf" file. I also checked the ports (port 9997 is open on my machine). Please help me solve the issue.
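A minimal sketch of the forwarder-side configuration, assuming default paths and port 9997. Note that inputs.conf does not exist under etc\system\local until you create it (or add inputs via the CLI), so its absence alone is not an error:

```
# $SPLUNK_HOME\etc\system\local\outputs.conf on the Universal Forwarder
[tcpout]
defaultGroup = local_indexer

[tcpout:local_indexer]
server = 127.0.0.1:9997

# $SPLUNK_HOME\etc\system\local\inputs.conf (create it if missing)
[WinEventLog://Security]
disabled = 0
```

Restart the forwarder after saving, then check index=_internal on the Enterprise instance for the forwarder's own logs to confirm the connection.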
Hi everyone, I currently have three dashboards that show the same processes in three states: "Ready To Process", "Processing", and "Complete". How can I create another dashboard that shows the duration each process takes to go from "Processing" to "Complete"?
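One way to sketch this, assuming each event carries a state field with those values and some correlation field such as process_id (both names are placeholders for whatever your data actually uses):

```
index=your_index state IN ("Processing", "Complete")
| stats min(eval(if(state=="Processing", _time, null()))) as processing_time,
        min(eval(if(state=="Complete", _time, null()))) as complete_time
        by process_id
| eval duration = tostring(complete_time - processing_time, "duration")
| table process_id, duration
```

tostring(..., "duration") renders the difference as HH:MM:SS; drop it if you want raw seconds for charting.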
I have a couple of questions.

- My search head and indexer are on the same server. I know my domain controllers and the universal forwarders on them are installed correctly, and I'm receiving AD data and security event logs in Splunk. However, every search in the Splunk App for Windows Infrastructure comes up empty.

- Besides that, the performance of my Splunk instance is very slow, and now I have the error message: "the tcp output processor has paused the dataflow. forwarding to host_dest= etc...". Now it seems that I don't receive data anymore. It happened after I put an inputs.conf in the App for Windows Infrastructure's local folder. After I removed it, I still don't receive any data.

I'm stuck now. What can I do to fix:
- the performance issues
- the fact that I don't receive data anymore
- the App for Windows Infrastructure not working (I really followed every step, and everything was checked green at setup)

Thanks for helping me.
Hi all, I need some help creating a new field. I have a field like the following:

Field 1
AABBCCDDEEFF
AAAABBBBCCCC

I'd like to make a new field where the values become:

AA-BB-CC-DD-EE-FF
AA-AA-BB-BB-CC-CC

Could someone help me with this? Thanks in advance!
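A sketch using eval's replace() function, which inserts a hyphen after every pair of characters except the last (the lookahead (?=\w) stops it from adding a trailing hyphen). Field2 is a hypothetical name for the new field, and the single quotes handle the space in "Field 1":

```
| eval Field2 = replace('Field 1', "(\w{2})(?=\w)", "\1-")
```

For the sample value AABBCCDDEEFF this should yield AA-BB-CC-DD-EE-FF.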
We need to set up an alert that fires whenever there are pending buckets, i.e. fix-up tasks pending in the cluster. We need a query to do that.
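One possible sketch, run on the cluster master, using the cluster fixup REST endpoint. The endpoint path and its level argument are taken from memory of the clustering REST API and should be verified against the REST API reference for your version:

```
| rest splunk_server=local /services/cluster/master/fixup level=generation
| stats count as pending_fixup_tasks
| where pending_fixup_tasks > 0
```

Saved as an alert with "trigger when number of results > 0", this would fire only while fix-up tasks are outstanding; other level values (e.g. search_factor, replication_factor) cover the other fix-up categories.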
I have seen some information about load balancing within the outputs.conf file, and some regarding LB-side configuration (using nginx). The scenario I have in mind is where either: a) all indexers are behind one address, say 'splunk-indexers', as a round-robin configuration, or alternatively 'least connections'; or b) each indexer is behind its own address, but still using LB/VIPs. Are there many client/splunkforwarder-side options here? I am not sure how it usually balances between listed indexing servers. I hope this isn't terribly explained... The virtual IPs point to either one or more indexing servers, depending on how we configure it.
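For reference, the forwarder's built-in auto load balancing usually removes the need for an external LB: listing the indexers directly in outputs.conf makes the forwarder rotate targets itself. A sketch with hypothetical hostnames:

```
[tcpout:indexer_pool]
server = idx1.example.com:9997, idx2.example.com:9997, idx3.example.com:9997
autoLBFrequency = 30
useACK = true
```

Network load balancers in front of indexers are generally discouraged for forwarder traffic, because forwarder connections are long-lived streams: a round-robin or least-connections LB tends to pin each forwarder to one indexer for long stretches, skewing data distribution, whereas autoLBFrequency forces the forwarder to switch targets on a schedule.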
Hi everyone, my Splunk instance has been migrated. When I search the index I get the errors below. I found these lookups among my automatic lookups, and I checked their permissions: they are Global. How can I remove these errors? Can someone guide me on this?

index="ABC"

[hvidltwa13] Could not load lookup=LOOKUP-SFDC-DASHBOARD1
[hvidltwa13] Could not load lookup=LOOKUP-SFDC-REPORT1
[hvidltwa13] Could not load lookup=LOOKUP-SFDC-USER_AGENT
[hvidltwa13] Could not load lookup=LOOKUP-SFDC-USER_NAME
[hvidltwa13] Could not load lookup=LOOKUP-SFDC-USER_NAME1
Hi folks, I need your help fetching the latest events based on a particular field. Sharing a sample event and the query I execute for the last 15 minutes.

Query: index=Blah sourcetype=blah_blah*

Example event:

2020-11-02 05:35:00.319, SOURCE="Tullett", COUNTVOL="879", TO_CHAR(SNAPTIME,'MM/DD/YYHH24:MI:SS')="08/31/20 00:59:00"

The initial date on this event looks OK (today's date, "2020-11-02 05:35:00.319"), but the date at the end, in the field SNAPTIME_NEW, is old ("08/31/20 00:59:00"). Can you please help me with a query so that, when I execute it for the last 15 minutes, I see only the latest events, sorted by the date in SNAPTIME_NEW? Screenshot attached. Thanks, Prateek
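A sketch, assuming SNAPTIME_NEW is already extracted and always follows the MM/DD/YY HH:MM:SS pattern shown: parse it into epoch time with strptime, then sort descending (sort 0 keeps all results rather than the default 10,000):

```
index=Blah sourcetype=blah_blah*
| eval snaptime_epoch = strptime(SNAPTIME_NEW, "%m/%d/%y %H:%M:%S")
| sort 0 - snaptime_epoch
```

If you instead want only the single latest event per SOURCE, adding | dedup SOURCE after the sort would keep the first (newest) event for each.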
Hi, how can I find out whether a field is extracted at index time or at search time?
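One quick check, as a sketch with placeholder names: the field::value syntax matches only indexed (index-time) fields, while field=value also matches search-time extractions. Comparing the two therefore tells you where the field comes from:

```
index=your_index your_field::some_value
index=your_index your_field=some_value
```

If the first search returns nothing while the second returns events, the field is extracted at search time; you can also check fields.conf for INDEXED = true on the field.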
So I have some data that comes in via a TCP input. I want to quickly run a specific search, but it requires the data to be formatted a bit differently. I think the tables below describe what I am looking to do, because I am unable to describe it well.

Event 1:
Name   Value
A      1
B      0
C      0
D      0
I      274

Event 2:
Name   Value
A      2
B      2
C      2
D      2
I      344

What I want is a new field for each of the Names, with every new Value "appended" for that event:

A   B   C   D   I
1   0   0   0   274
2   2   2   2   344
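A sketch of one way to pivot this, assuming Name and Value are extracted as multivalue fields on each event (your_index is a placeholder): number the events, pair up the names and values, expand the pairs, then pivot with xyseries:

```
index=your_index
| streamstats count as event_id
| eval pair = mvzip(Name, Value)
| mvexpand pair
| rex field=pair "^(?<metric>[^,]+),(?<val>.+)$"
| xyseries event_id metric val
```

This should produce one row per original event with a column per Name (A, B, C, D, I), matching the target table.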
I would like to get response time (95th percentile), error count, and transactions per second in one timechart. That way I can see, for example, at what TPS the response time or the error count starts to increase.

Currently I get TPS as follows:

search all | eval count=1 | timechart per_second(count) as transactions_per_second

Error count search: NOT status="200"
Response time: p95(Response_Time)
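The three pieces above can be combined into a single timechart; a sketch, keeping the same count=1 trick and adding an error flag (your_index is a placeholder):

```
index=your_index
| eval count = 1, is_error = if(status!="200", 1, 0)
| timechart per_second(count) as transactions_per_second,
            p95(Response_Time) as p95_response_time,
            sum(is_error) as error_count
```

Since TPS, milliseconds, and counts are on very different scales, putting one or two of the series on a chart overlay axis in the visualization settings usually makes the correlation readable.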
Hi all, I have a bunch of logs from which I would like to get some business value, with or without MLTK. I would like to create some dashboards from these 100k log events:

- some interesting fields, field values, etc.
- the most and least common patterns
- some notable transactions (longest/shortest, etc.)

I read some MLTK use cases but, being new to MLTK, I could not get anything out of them, and searching on Google didn't help either. Thanks for any suggestions, pointers, or views. Best regards, Sekar
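A couple of starting points that don't need MLTK, sketched with placeholder names: the cluster command groups similar raw events (surfacing common and rare patterns), and transaction plus its duration field surfaces the longest transactions, assuming some session_id-like correlation field exists in your data:

```
index=your_index
| cluster showcount=true
| table cluster_count, _raw
| sort - cluster_count
```

```
index=your_index
| transaction session_id
| sort - duration
| head 10
```

The fieldsummary command and the Patterns tab in the search UI are also quick, zero-setup ways to find interesting fields and event patterns.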
Hi, how do I write transforms.conf for fields that are not present in the metadata? For example, I need to write a transform for the field asset_env (asset_env = PROD). The transform below is not working:

[change_index_name]
SOURCE_KEY = field:asset_env
REGEX = ^asset_env::(\w+)
DEST_KEY = _MetaData:Index
FORMAT = index_$1
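The likely problem: index-time transforms run before search-time field extraction, so asset_env does not exist as a field at that point, and SOURCE_KEY = field:asset_env only works for indexed fields. A sketch of routing off the raw event text instead, assuming "asset_env=PROD" (or asset_env: PROD) appears literally in _raw and your_sourcetype is a placeholder:

```
# transforms.conf
[change_index_name]
SOURCE_KEY = _raw
REGEX = asset_env\s*[=:]\s*(\w+)
DEST_KEY = _MetaData:Index
FORMAT = index_$1

# props.conf
[your_sourcetype]
TRANSFORMS-route_by_env = change_index_name
```

This must live on the first full instance that parses the data (indexer or heavy forwarder, not a universal forwarder), and the target index (e.g. index_PROD) must already exist.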
Hi all, there are no apps on Splunkbase for "GoAnywhere", a file transfer app. From their website: "GoAnywhere File Transfer Mobile App — Secure file management in the Bring Your Own Device (BYOD) workplace is now possible with the GoAnywhere File Transfer mobile app." The logs are being sent to a HF using syslog.

So I would like to ask, if you have done some data ingestion manually:
- are there any special things to note while doing manual data ingestion,
- which things would you be interested in ingesting from a file transfer app,
- are there any security-related things to look for inside these GoAnywhere logs (as it's a file transfer app, users would be connecting from an external network to the corp network)?

Any pointers/suggestions/views please. Thanks! Best regards, Sekar
I have a table in Splunk. I'm trying to create a line graph with four lines. The X axis would be the bypass value; the Y axis would be the 50th and 80th percentiles of Type3, and the 50th and 80th percentiles of Type4. But there doesn't seem to be a way to combine stats, eval, and an if statement.
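stats can in fact take eval expressions as aggregation arguments (they just have to be renamed with "as"), which allows the per-type split in one pass. A sketch, with type, value, and bypass standing in for whatever the actual field names are:

```
index=your_index
| stats p50(eval(if(type=="Type3", value, null()))) as Type3_p50,
        p80(eval(if(type=="Type3", value, null()))) as Type3_p80,
        p50(eval(if(type=="Type4", value, null()))) as Type4_p50,
        p80(eval(if(type=="Type4", value, null()))) as Type4_p80
        by bypass
| sort bypass
```

Rendered as a line chart with bypass on the X axis, this gives the four percentile series as four lines.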