All Topics

Find Answers
Ask questions. Get answers. Find technical product solutions from passionate members of the Splunk community.


I am writing a custom search command that is quite performance-sensitive. On every invocation the script is launched twice and runs up to the prepare() method, which puts unnecessary strain on the system. I suspect this contributes to the command's general slowness, so the ability to disable the second invocation would be nice.
I am building a GeneratingCommand, and even in the most basic version a lot of time passes between the invocation of prepare() (called about 50 ms after start) and generate() (called about 170 ms after start). We plan to invoke this command frequently, so this gap matters. How can I reduce it?
Yes, I am a beginner! I installed the Splunk Enterprise free trial on Windows, but I wanted to install it on RHEL. Can I install another free trial on RHEL?
The page "About non-Python custom search commands" mentions that it is possible to write v2 custom search commands in languages other than Python, but there is absolutely no information about how such a thing would be implemented. What's the protocol? The closest thing to an explanation of the protocol I've found is NDietrich's GitHub repo and their accompanying talk, which I find rather disappointing. Why is there no official information to be found about it?
Hi all, I am a very new Splunk admin, trying to peel back the onion on the previous admin's work in this Splunk environment. A user created a dashboard in the "search" app and has asked me to delete it for them, as they cannot. What is the proper way to do this?

The only trace of it I can find is on all 3 search head peers, under the path "/opt/splunk/etc/apps/search/local/data/ui/views/${dashboard_name}.xml". I cannot find it on the cluster master, either in /etc/apps or /etc/shcluster/apps. Please help me figure out what to do next. Is it as simple as removing that XML file from all 3 search heads at the same time? Thanks in advance.
Does anyone know what the next release version number will be, and what the timeframe for it will be? I have an offline instance that we are about to start updating, but I don't want to do it unless we will have the most up-to-date version for a while.
My URL looks like https://google.demo.com/sites/demo/support/shared.demo/dump/ and I want to extract https://google.demo.com/sites/demo/support/ with a regex. What rex should I use?
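A minimal sketch of one possible rex, assuming the URL sits in a field named url (hypothetical name) and that you always want everything up to and including the third path segment:

| makeresults
| eval url="https://google.demo.com/sites/demo/support/shared.demo/dump/"
| rex field=url "(?<base_url>https?://[^/]+(?:/[^/]+){3}/)"
| table url base_url

The {3} assumes the wanted prefix always has exactly three path segments; if "sites/demo/support" is literally fixed, you could anchor on that text instead.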
Hi all, good day. We are using the DB Connect add-on to pull logs from multiple DBs and have created several inputs. We want to track activity in the DB Connect app, for example whether any user changes anything, creates new inputs, or disables inputs. Are there any searches to check for that activity?
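A rough starting point, assuming the configuration changes are made through REST endpoints in the splunk_app_db_connect namespace and that splunkd's REST access log is searchable in _internal (both assumptions; adjust the filter to the URIs you actually see). If method, uri and user are already extracted for this sourcetype in your environment, the rex is unnecessary.

index=_internal sourcetype=splunkd_access splunk_app_db_connect (POST OR DELETE)
| rex "^(?<clientip>\S+)\s+\S+\s+(?<req_user>\S+)\s+\[[^\]]+\]\s+\"(?<req_method>\w+)\s+(?<req_uri>\S+)"
| table _time req_user clientip req_method req_uri
| sort - _time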
My current project polls a device every 15 minutes to pull a counter, which is then charted. Thanks to members here, I now have this working as desired. Here is an example search:

index=index | where key="key_01" | timechart span=15m values(value) by mac_address

The key "key_01" is a counter that increases over time. If there is no more activity, the key stays at its current value, so over time we are counting totals. This produces a lovely line chart or bar chart. I would now like to display the delta between the values instead, so rather than the accumulated total we only see the "new" count since the last value. I've been reading posts and playing with the delta command but so far have not been able to get it to work. Here is what I thought I would need:

index=index | where key="key_01" | delta key_01 as delta_01 | timechart span=15m values(value) by mac_address

I would like to ask if anyone can help with getting the syntax right. As always, any help is very much appreciated! NM
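A minimal sketch of one way this is often handled, using streamstats to take the per-mac_address difference before charting (field names taken from the searches above; assumes value holds the running counter):

index=index
| where key="key_01"
| sort 0 _time
| streamstats current=f last(value) as prev_value by mac_address
| eval delta_value = value - prev_value
| timechart span=15m sum(delta_value) by mac_address

The delta command only compares each event with the immediately preceding one and has no by-clause, which is why it tends not to work when results are split by device; streamstats ... by mac_address keeps the comparison within each device. Also note that in the attempted search delta is applied to key_01, which is a value of the key field rather than a field itself; the numeric field is value. If the counter can ever reset, you may want to clamp negative deltas to zero.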
dbxquery allows queries using direct SQL, but dbxoutput only works with output objects defined in the DB Connect app. Is there a fundamental reason for limiting this function so that it must be done through the DB Connect app? If it is a security concern, that could simply be handled by granting a separate permission, so I wonder whether there is some other underlying reason. (My text may be awkward because I am using a translator; please understand.)
Hi Splunk Community,

I wondered if there is any way to match a keyword against a string in a lookup. For example, I have a lookup containing a field with a string:

items                   description
"orange apple banana"   fruit

And I have this field in my search results, item="apple":

| makeresults | eval item="apple"

Is there any way I can look up "apple" against "orange apple banana" in the lookup and output "fruit" from the description field? I can achieve the reverse of this with wildcard matching (e.g. "orange apple banana" matched against *apple*), but I haven't been able to find a way to match a keyword against a string. Does anyone know if this is possible? Thanks
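One possible sketch, assuming the lookup file is named fruit_lookup.csv (hypothetical name) with columns items and description: pull the whole lookup in via a dummy join key, then keep the rows whose items string contains the keyword as a whole word.

| makeresults
| eval item="apple", joiner=1
| join type=inner max=0 joiner
    [| inputlookup fruit_lookup.csv
     | eval joiner=1 ]
| where like(" ".items." ", "% ".item." %")
| table item items description

Padding both strings with spaces keeps the match whole-word, so "apple" does not also match "pineapple". For a large lookup this cross-join approach gets expensive, so treat it as a starting point rather than a scalable solution.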
With the indexer discovery method, the heavy forwarder's clear-text password is not being encrypted after a restart. Please help.
Hello, I have created and imported a lookup file, e.g. "hashes.csv", into which I have pasted a list of 500+ hashes. I want to search with index=* to see whether any of these hashes appear in the _raw field of any type of log. Thank you in advance.
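A minimal sketch of one common approach, assuming the CSV's column header is hash (hypothetical; rename to whatever your column is actually called). Renaming the field to "search" inside a subsearch makes the hash values come back as plain search terms, ORed together:

index=*
    [| inputlookup hashes.csv
     | rename hash as search
     | fields search ]
| table _time index sourcetype source _raw

500+ values is well within the default subsearch limits, but an index=* search over a long time range will still be slow, so narrow the indexes and time range where you can.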
Up until a month ago everything was working perfectly, but for the past 2-3 weeks our Splunk dashboards have not been showing any data, and the alert emails we receive are blank, with no report in them. What is the possible cause, and how can we resolve it?
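A hedged first check, assuming the dashboards and alert emails are driven by scheduled searches and that the scheduler logs are reaching the _internal index: see whether those searches are still running and whether they return any results (savedsearch_name, status and result_count are standard scheduler.log fields).

index=_internal sourcetype=scheduler
| stats count as runs, sum(eval(if(result_count==0, 1, 0))) as runs_with_no_results by savedsearch_name, status

If the searches run but return nothing, the problem is more likely on the data/ingestion side than in the dashboards or the mail configuration.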
Hi all, I want to extract the following word with a rex expression: ABC\qq1234. The expected result is qq1234. Please note that the substring I need will always come after ABC\. Any help will be appreciated!
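A small sketch, with the caveat that escaping the literal backslash is the fiddly part: when the pattern is typed in the search bar it usually has to be written as \\\\ (sometimes \\ is enough, depending on where the string gets parsed), so test against a sample event first.

| makeresults
| eval sample="ABC\\qq1234"
| rex field=sample "ABC\\\\(?<extracted>\w+)"
| table sample extracted

Against real data this would be | rex field=_raw "ABC\\\\(?<extracted>\w+)", or whichever field holds the text.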
Hi guys. I was wondering why my "LicenseUsage" reports, built on license_usage.log from the license master, sometimes do not log the "h" or "s" field (aliases for host and source). As a result I can get a full report by indexer or sourcetype, but not by host or source.

INFO LicenseUsage - type=Usage s="" st="MY_ST" h="" o="" idx="MY_IDX" i="xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx" pool="auto_generated_pool_enterprise" b=51331 poolsz=214748364800

Could it be the originating forwarder version? I have forwarders running versions from 6.x.x to 8.x.x. Any clue? Thanks.
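A small sketch to quantify how widespread the empty fields are before digging into forwarder versions, assuming the license master's internal logs are searchable from your search head:

index=_internal source=*license_usage.log* type=Usage
| eval h_state=if(isnull(h) OR h=="", "empty", "populated")
| eval s_state=if(isnull(s) OR s=="", "empty", "populated")
| stats sum(b) as bytes, count as events by idx, h_state, s_state

If a large share of the volume shows up with empty h and s, that points more toward the license master squashing per-host/per-source detail when there are too many distinct host/source pairs than toward the forwarder version.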
Happy New Year to all of you. I have syslog data containing details of the devices and switches. The requirement is to find the old and the new IP address for a NetworkName that was recently added to a group. To get this I have to follow the steps below:

1. Get the NetworkName that has been recently added to the group.
2. Get the latest CallingStation for that NetworkName.

Search for steps 1 and 2:

index=xyz NetworkGroups="Device Type#All Device Types#DNAC#SingleIONBranch" (Diag_Message="Authentication succeeded") NetworkName=USAZSLKRR01FIF0001 | stats latest(CallingStation) as CallingStation by NetworkName

3. Search the index with that CallingStation to get the IPAddress (this has to run over the last 24 hours):

index=na3rc Calling_Station_ID=B0-22-7A-32-32-26 | bin span=1d _time | stats latest(IPAddress) as IPAddress by _time CallingStation | eval IP=if(_time<relative_time(now(),"@d"),"Old","New")

The problem is that the IPAddress field has both the old and the new IP address. I tried join, but it shows no results because it maxes out, and when I try it in the same search it only shows the new IPAddress. Thanks in advance.

index=xyz NetworkGroups="Device Type#All Device Types#DNAC#SingleIONBranch" (Diag_Message="Authentication succeeded") NetworkName=USAZSLKRR01FIF0001 | stats latest(CallingStation) as CallingStation by NetworkName | join CallingStation type=left [| search index=xyz | bin span=1d _time | stats latest(IPAddress) as IPAddress by _time CallingStation | eval IP=if(_time<relative_time(now(),"@d"),"Old","New")]
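One join-free sketch, under the assumption that the MAC field in the second index is Calling_Station_ID (as in step 3) and that steps 1 and 2 can run as a subsearch feeding step 3; the old/new split then happens in a single pipeline:

index=na3rc Calling_Station_ID=*
    [ search index=xyz NetworkGroups="Device Type#All Device Types#DNAC#SingleIONBranch" Diag_Message="Authentication succeeded" NetworkName=USAZSLKRR01FIF0001
      | stats latest(CallingStation) as Calling_Station_ID by NetworkName
      | fields Calling_Station_ID ]
| eval Age=if(_time < relative_time(now(), "@d"), "Old", "New")
| stats latest(IPAddress) as IPAddress by Calling_Station_ID, Age

The subsearch returns only Calling_Station_ID, so its value becomes the filter for the outer search; run over the last day or two this should give one Old and one New row per station without hitting the join result limits.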
Hi Experts,

I would like to compare values in the same field (vlan_ids) for equality, grouped by a machine serial (hyp_serial). I want to validate whether the VLAN IDs configured on the two VMs under the same hyp_serial are the same or not. There will be 2 VMs under the same serial. Could you please help me with my requirement?

index=lab source=unix_hyp
| spath path=hyp_info{}{} output=LIST
| mvexpand LIST
| spath input=LIST
| where category == "hyp_vlan"
| table hyp_name hyp_serial vlan_ids

Table output:
hyp_name   hyp_serial   vlan_ids
hyp_vm1    AE12893X     5_767_285_2010
hyp_vm2    AE12893X     5_356_375_2010
hyp_vm3    ZX87627J     9_49_43_44_3120
hyp_vm4    ZX87627J     9_49_43_44_3120
hyp_vm5    YG92412K     5_767_285_2010
hyp_vm6    YG92412K     5_767

Expected output:
hyp_name   hyp_serial   vlan_ids          VLAN CHECK
hyp_vm1    AE12893X     5_767_285_2010    OK
hyp_vm2    AE12893X     5_356_375_2010    OK
hyp_vm3    ZX87627J     9_49_43_44_3120   OK
hyp_vm4    ZX87627J     9_49_43_44_3120   OK
hyp_vm5    YG92412K     5_767_285_2010    MISMATCH
hyp_vm6    YG92412K     5_767             MISMATCH
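A sketch that builds on the search above, assuming "same" means the vlan_ids strings under one hyp_serial are identical: count the distinct vlan_ids values per serial and flag anything other than exactly one.

index=lab source=unix_hyp
| spath path=hyp_info{}{} output=LIST
| mvexpand LIST
| spath input=LIST
| where category == "hyp_vlan"
| eventstats dc(vlan_ids) as vlan_variants by hyp_serial
| eval "VLAN CHECK" = if(vlan_variants == 1, "OK", "MISMATCH")
| table hyp_name hyp_serial vlan_ids "VLAN CHECK"

Note that a strict string comparison would also flag the AE12893X pair, whose vlan_ids strings differ; if partial overlap or ordering differences should still count as OK, the vlan_ids would need to be split and compared as sets instead.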
I installed UF 8.2.4 for Windows and, using the default pem and CA certificate, tried to connect the UF to the indexer. However, the event log data cannot be sent to the indexer, and this error is reported:

ERROR TcpInputProc - Error encountered for connection from src=192.168.xx.xxx:65251. error:140760FC:SSL routines:SSL23_GET_CLIENT_HELLO:unknown protocol

I searched through /opt/splunk/var/log/splunk/splunkd.log and could not find much more information. How can I get more detailed information to troubleshoot the problem?
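A hedged way to gather more detail from the indexer side, which definitely has the events (the host filter is a placeholder to fill in): summarize the TcpInputProc warnings and errors per source address to confirm whether only this forwarder is affected.

index=_internal sourcetype=splunkd host=<indexer_hostname> component=TcpInputProc (log_level=ERROR OR log_level=WARN)
| rex "src=(?<src>[^:\s]+)"
| stats count, latest(_raw) as latest_message by src

The "unknown protocol" message typically appears when one side speaks plain TCP while the other expects TLS, so comparing the receiving port's splunktcp vs splunktcp-ssl configuration with the forwarder's outputs.conf SSL settings is a reasonable first step.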
I am using Splunk 8.2.1 with DB Connect 3.5.1 and OpenJDK 1.8.0.332 on Linux (RHEL). This is an air-gapped system, so I cannot paste the logs. After installing DB Connect and restarting Splunk, DB Connect fails to start on both dbxquery.sh and server.sh.

In both scripts, after TrustManagerUtil - action=load_key_manager_succeed, it throws an ExceptionInInitializerError for SplunkServiceBuilder.<clinit>(SplunkServiceBuilder.java:19), complaining about:

Error setting up SSL socket factory: java.security.NoSuchAlgorithmException: SSL SSLContext not available

In my java.security I have:

jdk.disabled.namedCurves = secp256k1

I have commented out (for testing) jdk.certpath.disabledAlgorithms, jdk.jar.disabledAlgorithms, and jdk.tls.disabledAlgorithms, but I still get this error. It's the first time I'm seeing this, so I'm looking for guidance. Thanks