All Topics

I'm trying to write a Splunk query that finds files with a size below 10 bytes from a log file. I have the index and log location, but I'm unable to work out the exact query. Please help me write the query and create an alert from it.

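A minimal sketch of the kind of search this needs, assuming the events carry a numeric size field (file_size, file_name, and the index/source values below are placeholders, not from the original post):

index=my_index source="/var/log/app/files.log"
| where tonumber(file_size) < 10
| table _time, file_name, file_size

Saving the search via "Save As > Alert" with a trigger condition of "Number of Results is greater than 0" would turn it into an alert.
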
I currently need to change our single-site cluster to a two-indexer peered configuration, for ease of maintenance for the next person who replaces me at this site. I currently have 2 indexers, 1 deployment server, 1 cluster/license master, and 1 search head. Here is what I need to do.
First - Move all Splunk forwarders to use the second indexer as the deployment server, without having to reinstall all the forwarders. I thought this was as simple as changing the deployment.conf file, but it does not seem to be working. Maybe the cluster master has something?
Second - Remove the indexers from the cluster, delete the cluster from the Cluster Master, and break the distributed search.
Third - Change the license server to the first indexer.
At this point, I can shut down all servers except the indexer and everything will work. If there is anyone who can help me, I would greatly appreciate it. I want to do this as quickly and smoothly as possible. Thank you in advance. Robert

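For the first step, a sketch of deploymentclient.conf, the file a forwarder reads to find its deployment server (the hostname below is a placeholder; 8089 is the default management port):

[deployment-client]

[target-broker:deploymentServer]
targetUri = second-indexer.example.com:8089

The forwarder needs a restart after the change before it will phone home to the new deployment server.
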
Hi All, I have the below line of code to categorize transactions based on the response time (Duration), taken in seconds.

| eval ranges=case(Duration<=1,"less",Duration>1 and Duration<=3,"between",Duration>3,"greater")

Say I trigger a load test with 100 transactions that all take between 1 and 3 seconds; surprisingly, a few transactions, say 1 to 4 out of the 100, are NOT getting categorized in the table, even though their Duration column has a value between 1 and 3 seconds. Can someone please let me know what is going wrong?

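As a defensive sketch, coercing Duration to a number (in case some values were extracted as strings) and adding a catch-all branch makes the uncategorized rows visible; note that eval boolean operators are written in uppercase (AND):

| eval d=tonumber(Duration)
| eval ranges=case(d<=1,"less", d>1 AND d<=3,"between", d>3,"greater", true(),"uncategorized")
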
Hi friends, I am trying to put together some Splunk searches across application logs to establish what 'normal' traffic patterns look like versus DDoS-attacking IP addresses. The end goal is to answer the question: "For each IP that connects to our application, what is the average connection count within a 5m span, across a 2-hour period? What are the outlier (greater-than-average) 5m-span connection counts?" I have the following timechart, which has been useful, but I'm sure there is a better way to do this.

index=myapplicationindex sourcetype=_json cluster=cluster23 | timechart span=5m count by x_forwarded_for where count > 75

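One sketch of the per-IP average and its outliers over a 2-hour window, assuming x_forwarded_for holds the client IP:

index=myapplicationindex sourcetype=_json cluster=cluster23 earliest=-2h
| bin _time span=5m
| stats count AS connections BY _time, x_forwarded_for
| eventstats avg(connections) AS avg_connections BY x_forwarded_for
| where connections > avg_connections

Each surviving row is a 5-minute bucket in which an IP exceeded its own 2-hour average connection count.
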
We're doing a review of several thousand alerts. About half of them have this syntax at the end of the initial search terms, where "MyAlertName" is literally the alert name:

NOT tag::host=MyAlertName

What does it mean? It doesn't seem to make any difference whether it's there or not, but the searches do work with it present, so apparently it is syntactically correct. The docs I've found relating to the double-colon syntax don't seem to describe anything like this, and "host" in our environment is always a server name.

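For context, a sketch of what the syntax normally does: tag::<field>=<tagname> matches events whose field value carries that tag, so a search like the following (the tag name is hypothetical) would exclude hosts tagged "maintenance":

index=main NOT tag::host=maintenance

If no host is ever tagged with the alert's name, the clause matches nothing and has no effect, which would be consistent with what you observe.
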
Hi Splunkers, we are streaming Google app logs to Splunk in a distributed environment. We have the G Suite for Splunk app on the SH and the input add-on on a heavy forwarder. I am seeing a log drop on a particular day for about 2 hours, after which the logging returned to normal, and I am unable to identify the reason. Also, the G Suite application health dashboard shows the below error. @alacercogitatus, could you please help me identify the cause of the log drop and how to fix these errors?

Hi, I have the below string and I am trying to get StartTime, EndTime, and Count displayed in the dashboard.

"Non-Match - Window Event not matches with events Count with StartTime=2020-02-03T11:00:00.000Z EndTime=2020-02-03T11:00:00.000Z Count=100\"

I tried multiple rex formats but couldn't succeed. Can I get some help with this, please?

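A sketch of one rex that pulls the three values out of that string, assuming they always appear in this order, separated by whitespace:

| rex field=_raw "StartTime=(?<StartTime>\S+)\s+EndTime=(?<EndTime>\S+)\s+Count=(?<Count>\d+)"
| table StartTime, EndTime, Count
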
Hello experts, how do I round the values either to the whole number or to a maximum of two decimal places? Below is my search query:

| mstats avg(_value) prestats=true WHERE metric_name="memory.used" AND "index"="*" AND ( "host"="fsx2098" OR "host"="fsx2099" OR "host"="fsx0102" OR "host"="fsx0319" OR "host"="fsxtp072" ) AND `sai_metrics_indexes` span=auto BY host
| timechart avg(_value) useother=false BY host WHERE max in top20
| fields - _span*

Below is the result of the above:

Desired values:

time                  host1   host2   host3   host4
2022-03-29 13:20:00   26      33      34      32
2022-03-29 13:21:00   27      34      34      34

OR

time                  host1   host2   host3   host4
2022-03-29 13:20:00   26.80   33.96   34.25   32.93

Any help will be much appreciated.

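A sketch that rounds every host column after the timechart; foreach with a * wildcard should leave internal fields such as _time alone, since * does not match underscore-prefixed fields:

| timechart avg(_value) useother=false BY host WHERE max in top20
| foreach * [ eval <<FIELD>> = round('<<FIELD>>', 2) ]

Using round('<<FIELD>>', 0) instead gives the whole-number variant.
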
I am creating a dashboard which contains a query that returns application health events of this type:

Server       Application   Type           Status
servername   appname       App Health     UP
servername   appname       Disk Health    UP
servername   appname       LDAP Health    UP
servername   appname       Redis Health   DOWN

What I want instead is for the table to look like:

Server       Application   App Health   Disk Health   LDAP Health   Redis Health
servername   appname       UP           UP            UP            DOWN

What would be the best way to accomplish this? Thank you for any suggestions.

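One sketch that pivots the Type values into columns with xyseries, using a temporary combined row key (field names taken from the table above):

| eval row=Server . "::" . Application
| xyseries row Type Status
| eval Server=mvindex(split(row, "::"), 0), Application=mvindex(split(row, "::"), 1)
| fields - row
| table Server, Application, "App Health", "Disk Health", "LDAP Health", "Redis Health"
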
I currently have a UF that is sending data to two different Splunk environments.

[monitor:///data/folder1/]
index=main
sourcetype=applog1
_TCP_ROUTING = SplunkTEST
crcSalt = <SOURCE>

[monitor:///data/folder2/]
index=main
sourcetype=applog2
_TCP_ROUTING = SplunkPROD
crcSalt = <SOURCE>

When I run the following oneshot command, it sends the data to my SplunkPROD. How do I ensure it goes to SplunkTEST? Is there a setting for _TCP_ROUTING?

/opt/splunkforwarder/bin/splunk add oneshot /data/data/folder1/app1.log -index main -sourcetype "applog1"

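Since the oneshot input is defined on the command line rather than by the [monitor] stanzas, one hedged alternative is a batch input, which does accept _TCP_ROUTING in inputs.conf (this sketch assumes the SplunkTEST output group exists in outputs.conf; note that move_policy = sinkhole deletes the file once it is indexed):

[batch:///data/folder1/app1.log]
index = main
sourcetype = applog1
_TCP_ROUTING = SplunkTEST
move_policy = sinkhole
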
Hello, I have 3 indexers. After one of them was restarted, the Master Node crashes and creates a crash log every minute (when the indexer tries to connect to the cluster). Below is the crash log:

[build cd0848707637] 2022-03-29 17:48:34
Received fatal signal 6 (Aborted) on PID 3183981.
Cause: Signal sent by PID 3183981 running under UID 1004.
Crashing thread: CMAddPeerWorker-5
Registers:
 RIP: [0x00007FDB3792137F] gsignal + 271 (libc.so.6 + 0x3737F)
 RDI: [0x0000000000000002] RSI: [0x00007FDB121F9860] RBP: [0x00007FDB37A74698] RSP: [0x00007FDB121F9860]
 RAX: [0x0000000000000000] RBX: [0x0000000000000006] RCX: [0x00007FDB3792137F] RDX: [0x0000000000000000]
 R8: [0x0000000000000000] R9: [0x00007FDB121F9860] R10: [0x0000000000000008] R11: [0x0000000000000246]
 R12: [0x0000555F4AA9B818] R13: [0x0000555F4A93BC02] R14: [0x00000000000003C2] R15: [0x00007FDB16506238]
 EFL: [0x0000000000000246] TRAPNO: [0x0000000000000000] ERR: [0x0000000000000000] CSGSFS: [0x002B000000000033]
 OLDMASK: [0x0000000000000000]
OS: Linux
Arch: x86-64
Backtrace (PIC build):
 [0x00007FDB3792137F] gsignal + 271 (libc.so.6 + 0x3737F)
 [0x00007FDB3790BDB5] abort + 295 (libc.so.6 + 0x21DB5)
 [0x00007FDB3790BC89] ? (libc.so.6 + 0x21C89)
 [0x00007FDB37919A76] ? (libc.so.6 + 0x2FA76)
 [0x0000555F497B294F] _ZN8CMBucket14setRASummariesERK4GuidRKSt3mapI3Str15CMBucketSummarySt4lessIS4_ESaISt4pairIKS4_S5_EEE + 623 (splunkd + 0x28C694F)
 [0x0000555F496C13C8] _ZN15CMAddPeerWorker15finishAddBucketERP8CMBucketR15BucketCSVStruct + 136 (splunkd + 0x27D53C8)
 [0x0000555F496C2320] _ZN15CMAddPeerWorker19addStandaloneBucketERK13IndexDataTypeR15BucketCSVStruct + 128 (splunkd + 0x27D6320)
 [0x0000555F496C24B3] _ZN15CMAddPeerWorker20processBucketBatchesEv + 291 (splunkd + 0x27D64B3)
 [0x0000555F48757588] _ZN15CMAddPeerWorker4mainEv + 552 (splunkd + 0x186B588)
 [0x0000555F4959B917] _ZN6Thread8callMainEPv + 135 (splunkd + 0x26AF917)
 [0x00007FDB37CB717A] ? (libpthread.so.0 + 0x817A)
 [0x00007FDB379E6DC3] clone + 67 (libc.so.6 + 0xFCDC3)
Linux / splunk-master-prod-01.local.ad / 4.18.0-240.1.1.el8_3.x86_64 / #1 SMP Fri Oct 16 13:36:46 EDT 2020 / x86_64
Libc abort message: splunkd: /opt/splunk/src/clustering/CMBucket.cpp:962: void CMBucket::setRASummaries(const Guid&, const CMBucketSummaries&): Assertion `hasPeer(peer)' failed.
/etc/redhat-release: Red Hat Enterprise Linux release 8.5 (Ootpa)
glibc version: 2.28
glibc release: stable
Last errno: 0
Threads running: 103
Runtime: 56.398836s
argv: [splunkd --under-systemd --systemd-delegate=yes -p 8089 _internal_launch_under_systemd]
Regex JIT enabled
RE2 regex engine enabled
using CLOCK_MONOTONIC
Thread: "CMAddPeerWorker-5", did_join=0, ready_to_run=Y, main_thread=N, token=140578878629632
MutexByte: MutexByte-waiting={none}
x86 CPUID registers:
 0: 0000000D 756E6547 6C65746E 49656E69
 1: 000306F0 07040800 FFFA3203 1F8BFBFF
 2: 76036301 00F0B5FF 00000000 00C30000
 3: 00000000 00000000 00000000 00000000
 4: 00000000 00000000 00000000 00000000
 5: 00000000 00000000 00000000 00000000
 6: 00000004 00000000 00000000 00000000
 7: 00000000 00000000 00000000 00000000
 8: 00000000 00000000 00000000 00000000
 9: 00000000 00000000 00000000 00000000
 A: 07300401 000000FF 00000000 00000000
 B: 00000000 00000000 00000047 00000007
 C: 00000000 00000000 00000000 00000000
 00000000 00000000 00000000 00000000
 80000000: 80000008 00000000 00000000 00000000
 80000001: 00000000 00000000 00000021 2C100800
 80000002: 65746E49 2952286C 6F655820 2952286E
 80000003: 55504320 2D354520 30383236 20347620
 80000004: 2E322040 48473034 0000007A 00000000
 80000005: 00000000 00000000 00000000 00000000
 80000006: 00000000 00000000 01006040 00000000
 80000007: 00000000 00000000 00000000 00000100
 80000008: 0000302B 00000000 00000000 00000000
terminating...

And indexer-1 (the one that was rebooted) cannot join the cluster. Has anyone had this problem, and how did you deal with it? If more info is needed, I am able to send it.

Hello, I am trying to isolate the 'msg' field, which contains multiple embedded quotes, and when I use rex it either cannot grab what I need or it continues through the data and doesn't stop. Thanks!

outcome="Success"msg="The "Account is trusted for delegation" property was modified from No to Yes"cs3="

I have tried | rex field=_raw "msg=\"(?<msg>[^\"]+)" with no success.

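A sketch that ends the capture at a quote immediately followed by the next key= pair, rather than at the first quote (assuming every msg value is followed by another field such as cs3=):

| rex field=_raw "msg=\"(?<msg>.*?)\"\w+="

The lazy .*? lets the embedded quotes pass through until a closing quote that directly precedes the next field name and = sign.
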
I have added a URL using data inputs in Website Monitoring, but the URL is not being monitored and is not showing on the status overview page.

CURRENT APPLICATION
Website Monitoring
Version: 2.9.1
Build: 1579823072

So here is the issue. We have a distributed environment with DB Connect 3.5.1 running on a HF. DB inputs and DB outputs are seeing heavy use and are working. I used dbxlookup as well for a while (about a month ago) and it worked just fine. Today, though, neither the old dbxlookups nor any of my new ones work. They return empty columns (the column that should have been filled is there, but no values are present). Here is a test example:

| makeresults count=10
| streamstats count as id
| dbxlookup connection="myconnection" query="SELECT * FROM `my_db`.`tbl_id_to_name`" "id" AS "id" OUTPUT "name" AS "name"

I have an old environment which I use for tests with an older DBC, and the same queries work over there (and, as said, they worked a few weeks ago). I have triple- and quadruple-checked whether the tables used for the lookups have data inside, and yes, they do. I am baffled, no idea what is going on. Any suggestions?

Start-up issue:

Validating databases (splunkd validatedb) failed with code '1'. If you cannot resolve the issue(s) above after consulting documentation,

This error appeared after upgrading to the latest version. Thanks, Maurizio

Hey guys, I'm trying to create a search that should map a session from an internal application to the corresponding VPN session.
Main search - fields: IP_ADDRESS, USER_AD, _time - internal application login sessions.
Sub search - fields: Framed_IP_Address, User_Name, _time - VPN allocating the internal IP.
My goal is to check whether users are using their AD account to log into the application or not. The problem right now is that the field USER_AD is not displayed in the table, and I was wondering why that is happening and how I could remediate it.

index=tkrsec sourcetype="cisco:acs" Acct_Status_Type=Interim-Update earliest=-8h latest=-1m [ search index=tkrsec host=Hercules_fusion | rename IP_ADDRESS as Framed_IP_Address | table Framed_IP_Address ]
| eval time1=strftime(_time, "%m/%d/%y %I:%M:%S:%p")
| table User_Name,Acct_Status_Type,Framed_IP_Address,time1
| join type=outer USER_AD [ search index=tkrsec host=Hercules_fusion | eval time2=strftime(_time, "%m/%d/%y %I:%M:%S:%p") | table time2,USER_AD ]
| table User_Name,Acct_Status_Type,Framed_IP_Address,time1, USER_AD, time2

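One likely sketch of a fix: join needs its join field present on both sides, and the cisco:acs events carry no USER_AD, so joining on the shared IP instead (field and host names taken from the question) might look like:

| join type=outer Framed_IP_Address [ search index=tkrsec host=Hercules_fusion
    | rename IP_ADDRESS AS Framed_IP_Address
    | eval time2=strftime(_time, "%m/%d/%y %I:%M:%S %p")
    | table Framed_IP_Address, time2, USER_AD ]
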
Hi, can I have the wget command or a link to install Splunk 6.6.0 and the Splunk forwarder 6.6.0? Windows and Linux versions, please. Thanks to all.

Hey guys, I'm trying to create a search that should map a session from an internal application to the corresponding VPN session.
Main search - fields: IP_ADDRESS, USER_AD, _time - internal application login sessions.
Sub search - fields: Framed_IP_Address, User_Name, _time - VPN allocating the internal IP.
Basically, my approach was to left-join the VPN search to the main search (internal application login sessions) by internal IP, but the main problem is that when the results table is displayed, it maps the first VPN session found with the specified IP_ADDRESS from the join, while I need to map the latest IP allocation. Example:
IP 10.0.0.1 was allocated to user x at 10:00 - user x did not attempt to log into the internal app.
IP 10.0.0.1 was allocated to user y0 at 10:40.
IP 10.0.0.1 made a login session for user y1 at 11:00.
My table of results will display: user x, user y1, 10.0.0.1, 10.0.0.1, 11:00, 10:00
Instead of: user y0, user y1, 10.0.0.1, 10.0.0.1, 11:00, 10:40
I understand from the join command documentation that "join left=L right=R usetime=true earlier=true where L.IP_ADDRESS=R.Framed_IP_Address" looks for the IP in the internal app login session and maps it to the first event that has that IP in the VPN allocation search prior to the internal application session.
Could you please help me get the latest VPN session for the IP that is matched in the internal application login session, instead of the earliest (as is the default in the join command)?

index=x host=internal_application
| eval time2=strftime(_time, "%m/%d/%y %I:%M:%S:%p")
| join left=L right=R usetime=true earlier=true where L.IP_ADDRESS=R.Framed_IP_Address [search index=x sourcetype="cisco:acs" Acct_Status_Type=Interim-Update earliest=-12h latest=-1m | eval time1=strftime(_time, "%m/%d/%y %I:%M:%S:%p")]
| table R.User_Name, L.USER_AD, R.Framed_IP_Address, L.IP_ADDRESS, L.time2, R.time1
| rename R.User_Name as VPN_User, L.USER_AD as Hercules_user, R.Framed_IP_Address as "IP assigned by VPN", L.IP_ADDRESS as "IP Hercules", L.time2 as "User connecting at", R.time1 as "IP allocation time"
| eval Hercules_user=lower(Hercules_user)
| where Hercules_user!=VPN_User
| table VPN_User, Hercules_user, "IP assigned by VPN", "IP Hercules", "User connecting at", "IP allocation time"

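A join-free sketch of the "latest allocation before the login" pattern, using streamstats to carry the most recent VPN allocation forward per IP (index, host, and field names are taken from the question; the 12-hour window is an assumption):

index=x (host=internal_application OR (sourcetype="cisco:acs" Acct_Status_Type=Interim-Update)) earliest=-12h
| eval ip=coalesce(IP_ADDRESS, Framed_IP_Address)
| eval vpn_time=if(sourcetype=="cisco:acs", _time, null())
| sort 0 ip, _time
| streamstats last(User_Name) AS VPN_User last(vpn_time) AS alloc_time BY ip
| search host=internal_application
| eval "IP allocation time"=strftime(alloc_time, "%m/%d/%y %I:%M:%S %p")
| table VPN_User, USER_AD, ip, _time, "IP allocation time"

Because events are sorted oldest-first within each IP, last() always reflects the allocation closest before each login.
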
When we are doing searches in Splunk, we are encountering a strange issue. For example, when I add sc4s_fromhostip=... to the search, I can't see all the events, and sometimes I can't see any results, when normally there are events. When I check with stats (... | stats count by sc4s_fromhostip), I can see the number of events. When I put a wildcard * at the end (sc4s_fromhostip=...*), the number of events increases, but it still doesn't show all of them. If I do an eval and make a copy of the sc4s_fromhostip field, it works properly and I can see all the results, like ... | eval a=sc4s_fromhostip | a=… *
* This happens on all the search heads, in the cluster and outside the cluster.
* If I change the user, it still continues.
Did anyone encounter a similar issue before?

Hello experts, I just want my field `snow_os_version` to show at most two decimal places; for example, the first entry should only be `3.10`. How do I achieve that?

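Two sketches, depending on what the field holds. If snow_os_version is numeric, printf keeps two decimal places including a trailing zero (round(x, 2) would show 3.1 rather than 3.10):

| eval snow_os_version=printf("%.2f", tonumber(snow_os_version))

If it is a version string such as 3.10.0-1160 (an assumption, since the screenshot is not visible here), a rex keeps just the first two components:

| rex field=snow_os_version "^(?<snow_os_version_short>\d+\.\d+)"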