All Topics

Hello, I have 3 indexers. After one of them was restarted, the Master Node crashed and now creates a crash log every minute (whenever the indexer tries to connect to the cluster). Crash log below:

[build cd0848707637] 2022-03-29 17:48:34 Received fatal signal 6 (Aborted) on PID 3183981.
Cause: Signal sent by PID 3183981 running under UID 1004.
Crashing thread: CMAddPeerWorker-5
Registers:
RIP: [0x00007FDB3792137F] gsignal + 271 (libc.so.6 + 0x3737F)
RDI: [0x0000000000000002] RSI: [0x00007FDB121F9860] RBP: [0x00007FDB37A74698] RSP: [0x00007FDB121F9860]
RAX: [0x0000000000000000] RBX: [0x0000000000000006] RCX: [0x00007FDB3792137F] RDX: [0x0000000000000000]
R8: [0x0000000000000000] R9: [0x00007FDB121F9860] R10: [0x0000000000000008] R11: [0x0000000000000246]
R12: [0x0000555F4AA9B818] R13: [0x0000555F4A93BC02] R14: [0x00000000000003C2] R15: [0x00007FDB16506238]
EFL: [0x0000000000000246] TRAPNO: [0x0000000000000000] ERR: [0x0000000000000000]
CSGSFS: [0x002B000000000033] OLDMASK: [0x0000000000000000]
OS: Linux
Arch: x86-64

Backtrace (PIC build):
[0x00007FDB3792137F] gsignal + 271 (libc.so.6 + 0x3737F)
[0x00007FDB3790BDB5] abort + 295 (libc.so.6 + 0x21DB5)
[0x00007FDB3790BC89] ? (libc.so.6 + 0x21C89)
[0x00007FDB37919A76] ? (libc.so.6 + 0x2FA76)
[0x0000555F497B294F] _ZN8CMBucket14setRASummariesERK4GuidRKSt3mapI3Str15CMBucketSummarySt4lessIS4_ESaISt4pairIKS4_S5_EEE + 623 (splunkd + 0x28C694F)
[0x0000555F496C13C8] _ZN15CMAddPeerWorker15finishAddBucketERP8CMBucketR15BucketCSVStruct + 136 (splunkd + 0x27D53C8)
[0x0000555F496C2320] _ZN15CMAddPeerWorker19addStandaloneBucketERK13IndexDataTypeR15BucketCSVStruct + 128 (splunkd + 0x27D6320)
[0x0000555F496C24B3] _ZN15CMAddPeerWorker20processBucketBatchesEv + 291 (splunkd + 0x27D64B3)
[0x0000555F48757588] _ZN15CMAddPeerWorker4mainEv + 552 (splunkd + 0x186B588)
[0x0000555F4959B917] _ZN6Thread8callMainEPv + 135 (splunkd + 0x26AF917)
[0x00007FDB37CB717A] ? (libpthread.so.0 + 0x817A)
[0x00007FDB379E6DC3] clone + 67 (libc.so.6 + 0xFCDC3)

Linux / splunk-master-prod-01.local.ad / 4.18.0-240.1.1.el8_3.x86_64 / #1 SMP Fri Oct 16 13:36:46 EDT 2020 / x86_64
Libc abort message: splunkd: /opt/splunk/src/clustering/CMBucket.cpp:962: void CMBucket::setRASummaries(const Guid&, const CMBucketSummaries&): Assertion `hasPeer(peer)' failed.
/etc/redhat-release: Red Hat Enterprise Linux release 8.5 (Ootpa)
glibc version: 2.28
glibc release: stable
Last errno: 0
Threads running: 103
Runtime: 56.398836s
argv: [splunkd --under-systemd --systemd-delegate=yes -p 8089 _internal_launch_under_systemd]
Regex JIT enabled
RE2 regex engine enabled
using CLOCK_MONOTONIC
Thread: "CMAddPeerWorker-5", did_join=0, ready_to_run=Y, main_thread=N, token=140578878629632
MutexByte: MutexByte-waiting={none}
x86 CPUID registers:
0: 0000000D 756E6547 6C65746E 49656E69
1: 000306F0 07040800 FFFA3203 1F8BFBFF
2: 76036301 00F0B5FF 00000000 00C30000
3: 00000000 00000000 00000000 00000000
4: 00000000 00000000 00000000 00000000
5: 00000000 00000000 00000000 00000000
6: 00000004 00000000 00000000 00000000
7: 00000000 00000000 00000000 00000000
8: 00000000 00000000 00000000 00000000
9: 00000000 00000000 00000000 00000000
A: 07300401 000000FF 00000000 00000000
B: 00000000 00000000 00000047 00000007
C: 00000000 00000000 00000000 00000000 00000000 00000000 00000000 00000000
80000000: 80000008 00000000 00000000 00000000
80000001: 00000000 00000000 00000021 2C100800
80000002: 65746E49 2952286C 6F655820 2952286E
80000003: 55504320 2D354520 30383632 20347620
80000004: 2E322040 48473034 0000007A 00000000
80000005: 00000000 00000000 00000000 00000000
80000006: 00000000 00000000 01006040 00000000
80000007: 00000000 00000000 00000000 00000100
80000008: 0000302B 00000000 00000000 00000000
terminating...

And indexer-1 (the one that was rebooted) cannot join the cluster. Has anyone had this problem, and how did you deal with it? If more info is needed, I can send it.
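Not part of the original post, but a hedged way to see what the Master believes about its peers before the crash loop kicks in; both the CLI command and the REST endpoint are standard cluster-master interfaces, though the exact fields returned may vary by version:

# on the Master Node:
splunk show cluster-status

# or from a search head via SPL:
| rest /services/cluster/master/peers
| table label, status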
Hello, I am trying to isolate the 'msg' field, whose value contains embedded quotes. When I use rex, it either cannot grab what I need or it runs past the end of the value and doesn't stop. Thanks!

outcome="Success"msg="The "Account is trusted for delegation" property was modified from No to Yes"cs3="

I have tried | rex field=_raw "msg=\"(?<msg>[^\"]+)" with no success.
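A hedged sketch of one way around the embedded quotes: since [^\"]+ stops at the first inner quote, match lazily up to a quote that is immediately followed by the next key= pair (or end of event). This assumes all keys are word characters followed by =, as in the sample above:

| rex field=_raw "msg=\"(?<msg>.*?)\"(?=\w+=|$)"

Against the sample event, this should capture: The "Account is trusted for delegation" property was modified from No to Yes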
I have added a URL using data inputs in Website Monitoring, but the URL is not being monitored and does not show up on the status overview page.

CURRENT APPLICATION
Website Monitoring
Version: 2.9.1
Build: 1579823072
So here is the issue. We have a distributed environment with DB Connect 3.5.1 running on a HF. DB inputs and DB outputs are seeing heavy use and are working. I used dbxlookup as well for a while (about a month ago) and it worked just fine. Today, though, neither the old dbxlookups nor any of my new ones work. They return empty columns (the column that should have been filled is there, but no values are present). Here is a test example:

| makeresults count=10
| streamstats count as id
| dbxlookup connection="myconnection" query="SELECT * FROM `my_db`.`tbl_id_to_name`" "id" AS "id" OUTPUT "name" AS "name"

I have an old environment which I use for tests with an older DBC, and the same queries work over there (and, as I said, they worked here a few weeks ago). I have triple- and quadruple-checked that the tables used for the lookups have data inside, and yes, they do... I am baffled, no idea what is going on. Any suggestions?
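Not from the original post, but a hedged first diagnostic step: DB Connect writes its own logs to _internal, and dbxlookup errors usually surface there. The source wildcard below is an assumption, since the exact log file names vary by DB Connect version:

index=_internal source=*splunk_app_db_connect* (ERROR OR WARN)
| sort - _time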
Start-up issue:

Validating databases (splunkd validatedb) failed with code '1'. If you cannot resolve the issue(s) above after consulting documentation,

Error after an upgrade to the latest version. Thanks, Maurizio
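Not part of the original question, but a hedged sketch of a common next step: a validatedb failure at start-up is often caused by a corrupt bucket, which splunk fsck can scan for. The exact flags vary by Splunk version, so treat these as an assumption to verify with splunk help fsck:

# scan all buckets in all indexes for corruption (read-only)
splunk fsck scan --all-buckets-all-indexes

# if corruption is reported, a repair pass can be attempted
splunk fsck repair --all-buckets-all-indexes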
Hey guys, I'm trying to create a search that maps a session from an internal application to the corresponding VPN session.
Main search - fields: IP_ADDRESS, USER_AD, _time - internal application login sessions.
Subsearch - fields: Framed_IP_Address, User_Name, _time - VPN allocating the internal IP.
My goal is to check whether users are using their AD account to log into the application or not. The problem right now is that the USER_AD field is not displayed in the table, and I was wondering why that happens and how I could remediate it.

index=tkrsec sourcetype="cisco:acs" Acct_Status_Type=Interim-Update earliest=-8h latest=-1m
    [ search index=tkrsec host=Hercules_fusion
    | rename IP_ADDRESS as Framed_IP_Address
    | table Framed_IP_Address ]
| eval time1=strftime(_time, "%m/%d/%y %I:%M:%S:%p")
| table User_Name, Acct_Status_Type, Framed_IP_Address, time1
| join type=outer USER_AD
    [ search index=tkrsec host=Hercules_fusion
    | eval time2=strftime(_time, "%m/%d/%y %I:%M:%S:%p")
    | table time2, USER_AD ]
| table User_Name, Acct_Status_Type, Framed_IP_Address, time1, USER_AD, time2
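Not from the original post, but a hedged observation with a sketch: join matches rows on the named field, and the outer search here has no USER_AD field, so every row joins to nothing and USER_AD comes back empty. Under the assumption that the internal IP is the real shared key, one alternative is to join on the IP instead and carry USER_AD through the subsearch:

index=tkrsec sourcetype="cisco:acs" Acct_Status_Type=Interim-Update earliest=-8h latest=-1m
| eval time1=strftime(_time, "%m/%d/%y %I:%M:%S:%p")
| join type=outer Framed_IP_Address
    [ search index=tkrsec host=Hercules_fusion
    | rename IP_ADDRESS as Framed_IP_Address
    | eval time2=strftime(_time, "%m/%d/%y %I:%M:%S:%p")
    | table Framed_IP_Address, USER_AD, time2 ]
| table User_Name, Acct_Status_Type, Framed_IP_Address, time1, USER_AD, time2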
Hi, can I have the wget command or a link to install Splunk 6.6.0 and the Splunk forwarder 6.6.0? Windows and Linux versions, please. Thanks to all.
Hey guys, I'm trying to create a search that maps a session from an internal application to the corresponding VPN session.
Main search - fields: IP_ADDRESS, USER_AD, _time - internal application login sessions.
Subsearch - fields: Framed_IP_Address, User_Name, _time - VPN allocating the internal IP.
Basically, my approach was to left-join the VPN search to the main search (internal application login sessions) by internal IP. The main problem is that when the results table is displayed, the join maps the first VPN session found with the specified IP_ADDRESS, while I need to map the latest IP allocation.
Example:
IP 10.0.0.1 was allocated to user x at 10:00 - user x did not attempt to log into the internal app.
IP 10.0.0.1 was allocated to user y0 at 10:40.
IP 10.0.0.1 made a login session for user y1 at 11:00.
My table of results displays: user x, user y1, 10.0.0.1, 10.0.0.1, 11:00, 10:00
Instead of: user y0, user y1, 10.0.0.1, 10.0.0.1, 11:00, 10:40

I understand from the join command documentation that "join left=L right=R usetime=true earlier=true where L.IP_ADDRESS=R.Framed_IP_Address" looks for the IP in the internal app login session and maps it to the first event with that IP in the VPN allocation search prior to the internal application session.

Could you please help me get the latest VPN session for the IP matched in the internal application login session, instead of the earliest (as join does by default)?

index=x host=internal_application
| eval time2=strftime(_time, "%m/%d/%y %I:%M:%S:%p")
| join left=L right=R usetime=true earlier=true where L.IP_ADDRESS=R.Framed_IP_Address
    [ search index=x sourcetype="cisco:acs" Acct_Status_Type=Interim-Update earliest=-12h latest=-1m
    | eval time1=strftime(_time, "%m/%d/%y %I:%M:%S:%p") ]
| table R.User_Name, L.USER_AD, R.Framed_IP_Address, L.IP_ADDRESS, L.time2, R.time1
| rename R.User_Name as VPN_User, L.USER_AD as Hercules_user, R.Framed_IP_Address as "IP assigned by VPN", L.IP_ADDRESS as "IP Hercules", L.time2 as "User connecting at", R.time1 as "IP allocation time"
| eval Hercules_user=lower(Hercules_user)
| where Hercules_user!=VPN_User
| table VPN_User, Hercules_user, "IP assigned by VPN", "IP Hercules", "User connecting at", "IP allocation time"
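Not from the original post, but a hedged sketch of a join-free pattern for "most recent allocation before the login": union both sources, sort ascending by time, and let streamstats carry the latest VPN allocation forward per IP (last() skips events where the field is null, so each login inherits the most recent allocation seen so far). Index, host, and field names are taken from the searches above; everything else is an assumption:

index=x (host=internal_application) OR (sourcetype="cisco:acs" Acct_Status_Type=Interim-Update)
| eval type=if(host="internal_application", "login", "vpn")
| eval ip=coalesce(Framed_IP_Address, IP_ADDRESS)
| eval vpn_user=if(type="vpn", User_Name, null())
| eval vpn_time=if(type="vpn", _time, null())
| sort 0 _time
| streamstats last(vpn_user) as VPN_User, last(vpn_time) as alloc_time by ip
| where type="login"
| eval Hercules_user=lower(USER_AD)
| where Hercules_user!=VPN_User
| table VPN_User, Hercules_user, ip, _time, alloc_time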
When we do searches in Splunk we encounter a strange issue. For example, when I add sc4s_fromhostip=... to the search I can't see all the events, and sometimes I can't see any results at all, although normally there are events. When I check with stats (... | stats count by sc4s_fromhostip) I can see the number of events. When I put a wildcard * at the end (sc4s_fromhostip=...*), the number of events increases, but it still doesn't show all of them. If I do an eval and make a copy of the sc4s_fromhostip field, it works properly and I can see all the results, like ... | eval a=sc4s_fromhostip | search a=…*
* This happens on all the search heads, in the cluster and outside the cluster.
* If I change the user, it still continues.
Did anyone encounter a similar issue before?
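Not from the original post, but a hedged note: these symptoms (an equality search misses events while an eval copy finds them) match the classic case where Splunk treats a search-time field as if its value were an indexed term and pre-filters events at the index level. If that is the cause here, the documented fix is to declare the field as not indexed in fields.conf on the search heads; the stanza below assumes sc4s_fromhostip is extracted at search time:

# fields.conf
[sc4s_fromhostip]
INDEXED_VALUE = false

A quick way to test the theory without changing config is to force the comparison after event retrieval, e.g. ... | where sc4s_fromhostip="10.1.2.3".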
Hello experts, I just want my snow_os_version field to show up to 2 decimal points; e.g., the first entry should be only 3.10. How do I achieve that?
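Not from the original post, but a hedged sketch. If snow_os_version is a dotted version string (e.g. something like 3.10.0-1160, which is an assumption), numeric rounding would turn 3.10 into 3.1, so a rex that keeps the first two dotted components is probably safer; the new field name is hypothetical:

| rex field=snow_os_version "^(?<snow_os_version_short>\d+\.\d+)"

If the field really is a plain number and 3.1 is acceptable, round() works instead:

| eval snow_os_version=round(snow_os_version, 2)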
I am new to Splunk and I cannot figure out how to check the values and evaluate True/False. Below is the query that I tried.

index=windows host=testhost1 EventID IN ("4688","4103","4104","4768","4769")
| eval ev_field = if('EventID' IN ("4688","4103","4104","4768","4769"), "True", "False")
| dedup host, EventID, ev_field
| table host, EventID, ev_field

The requirement is to check whether, e.g., "testhost1" has particular values in the EventID field, and if not, to mark them as "False" or something like that. The query that I made evaluates and adds "True" to ev_field for the EventIDs that it finds, but I can't figure out how to add "False" for the EventIDs that it does not match or find in the logs; EventIDs that are not present in the logs simply won't show in the results. This is the result that I get: [screenshot] This is what I actually need: [screenshot] (the second image is edited just as an example to make my point about the result I need).
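Not part of the original question, but a hedged sketch of the usual pattern: events that don't exist can't produce rows, so append a zero-count row for every expected EventID and then merge. The host and EventID values are taken from the question:

index=windows host=testhost1 EventID IN ("4688","4103","4104","4768","4769")
| stats count by host, EventID
| append
    [| makeresults
    | eval host="testhost1", EventID=split("4688,4103,4104,4768,4769", ","), count=0
    | mvexpand EventID ]
| stats sum(count) as count by host, EventID
| eval ev_field=if(count>0, "True", "False")
| table host, EventID, ev_field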
Hi. I ran into a major problem to which I am unable to apply a real fix. I have tried all versions of forwarders (Linux x64), from 7.x.x to 8.x.x; the problem is always the same. I have a directory with millions of small files, with about 150,000 being generated per day, continuously. To manage this path, the forwarder starts to occupy several GB of RAM instead of the usual few hundred MB. I tested ignoreOlderThan = 1h (also 5m) in inputs.conf, but the result does not change; it is always several GB (about 20% of the system RAM). Is there a method to avoid this excessive consumption? Thank you.
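Not from the original post, but a hedged sketch: the tailing processor keeps state for every file it tracks, which is usually what drives memory up with millions of files. If the files are written once and can be removed after indexing, a batch input with move_policy = sinkhole deletes each file after it is read and keeps the tracked set small; the path below is hypothetical:

# inputs.conf - read each file once, then delete it
[batch:///var/myapp/spool]
move_policy = sinkhole
disabled = 0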
I'm using Splunk Enterprise 8.2.5 on Windows (both indexers and forwarders). I have modified inputs.conf on the indexer as follows, to reference my PKI-signed certificate/key pair:

[splunktcp-ssl:9998]
disabled = 0

[SSL]
serverCert = C:\Program Files\Splunk\etc\auth\mycert\my.pem
sslPassword = mypassword
requireClientCert = false
sslVersions = *,-ssl2,-ssl3,-tls1.0,-tls1.1

After a service restart I see port 9998 listening on the indexer. I added the following config to the outputs.conf of my forwarder:

[tcpout:production]
server = myindexerfqdn:9998
useSSL = true

No data is getting forwarded, though, and the following is raised in splunkd.log at the forwarder:

03-29-2022 13:01:11.229 +0100 ERROR SSLCommon [37916 parsing] - Can't read certificate file errno=33558528 error:02001000:system library:fopen:system library
03-29-2022 13:01:11.229 +0100 ERROR TcpOutputProc [37916 parsing] - Error initializing SSL context - check splunkd.log regarding configuration error for server myindexerfqdn:9998

What is the Windows forwarder looking for? I set the indexer not to verify client certs, but does the forwarder need a client certificate (self-signed or otherwise) generated regardless, to use SSL?
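Not from the original post, but a hedged sketch: the "Can't read certificate file ... fopen" error suggests the forwarder is trying to open a certificate path that doesn't exist, often a default or inherited cert path. One way to narrow it down, assuming the settings below (the clientCert path is hypothetical, and sslVerifyServerCert = false is for testing only):

# outputs.conf on the forwarder
[tcpout:production]
server = myindexerfqdn:9998
useSSL = true
sslVerifyServerCert = false
# only needed if the indexer sets requireClientCert = true:
# clientCert = C:\Program Files\SplunkUniversalForwarder\etc\auth\mycerts\client.pem
# sslPassword = <key password>

btool should show exactly which setting the bad path comes from:

splunk btool outputs list tcpout:production --debug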
Hi all, can someone please explain the below architecture for syslog?
I have a table in a dashboard generated from stats, and a time picker in my dashboard. Based on the value selected in the time picker, the table values change. The column values are not links. The first column has error-message values, which are unique. If I click a value in the first column, the events having that message (field.msg) should be displayed in another window; this applies to all values in that first column.

Thanks for your time and your intention to help...
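Not from the original post, but a hedged Simple XML sketch of a per-column drilldown that opens a search in a new window. The index name is hypothetical, the column/field name is assumed to be field.msg based on the description, and $click.value$ is the clicked row's first-column value ($click.value|u$ URL-encodes it):

<table>
  ...
  <drilldown>
    <condition field="field.msg">
      <link target="_blank">search?q=search%20index%3Dmy_index%20%22field.msg%22%3D%22$click.value|u$%22</link>
    </condition>
  </drilldown>
</table>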
Hi all, any idea how to generate an alert when a password does not contain any special characters? That is, whenever the password in my data is plain alphanumeric text, I should generate an alert. I'm a newbie; please help out with the query. Thanks in advance.
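Not from the original post, but a hedged sketch using match(): flag events where the password field contains no character outside A-Z, a-z, 0-9. The index, sourcetype, and field names are assumptions:

index=my_index sourcetype=my_sourcetype
| where isnotnull(password) AND NOT match(password, "[^A-Za-z0-9]")
| table _time, user, password

Saved as an alert with "trigger when number of results > 0", this would fire whenever such a password shows up.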
Hi, I am new to Splunk and am creating a Splunk dashboard. I have interesting fields like field1.field2.x.stacktrace{}, field1.field2.x.x.stacktrace{}, field1.field2.x.x.x.stacktrace{}, fieldN.msg, and field.time. I am counting based on fieldN.msg and displaying latest(field.time) and count(fieldN.msg) for each group using stats (stats count(fieldN.msg), latest(field.time) by fieldN.msg). Some events have values in field1.field2.x.stacktrace{} or field1.field2.x.x.stacktrace{} or field1.field2.x.x.x.stacktrace{}; for some events those fields are not available at all, and for some events values may be present in both field1.field2.x.stacktrace{} and field1.field2.x.x.stacktrace{}. How can I get the latest stacktrace of each group as another field in the stats table if a stacktrace is available at any level, and display "NA" if it is not available in any event of the group?
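Not from the original post, but a hedged sketch: coalesce the three levels into one field before the stats (stats functions ignore events where the field is null, so latest() picks the newest event that actually carried a stacktrace), then fillnull for groups that have none. Field names are copied from the question:

| eval stacktrace=coalesce('field1.field2.x.stacktrace{}', 'field1.field2.x.x.stacktrace{}', 'field1.field2.x.x.x.stacktrace{}')
| stats count(fieldN.msg) as msg_count, latest(field.time) as latest_time, latest(stacktrace) as latest_stacktrace by fieldN.msg
| fillnull value="NA" latest_stacktrace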
Hi Team, I have two reports. One report (report1) has a timestamp field; the other (report2) doesn't and has only a date in the source filename, and report2 is not sent to Splunk in real time. Now I would like to combine those two reports and populate the data. I have extracted the date from source (at search time), and report1 and report2 share one common field. I have already ingested the files; how do I compare the dates and populate the data? Can anyone suggest whether a "where clause" can be used in this case?
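Not from the original post, but a hedged sketch of comparing an event's date against a date embedded in the source filename; the filename date format (YYYY-MM-DD) and field names are assumptions:

| rex field=source "(?<file_date>\d{4}-\d{2}-\d{2})"
| eval event_date=strftime(_time, "%Y-%m-%d")
| where event_date=file_date

In where, event_date=file_date compares the two fields to each other, so yes, a where clause fits this case once both dates are normalized to the same format.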
Hello Splunk community! I am using indexers as virtual machines in VMware, and I would like to increase the size of the drive where my logs are indexed. I am using LVM, and I would like to know if there are some best practices to follow before extending the drive size on the VMware side. Do I need to stop Splunk on the indexers before increasing the drive size? What are the potential risks during this operation? Thanks a lot for your help!
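Not from the original post, but a hedged sketch of the usual online LVM grow after the virtual disk has been enlarged on the VMware side; growing (unlike shrinking) is generally safe while the filesystem is mounted, and the device, VG, and LV names below are hypothetical:

# rescan the disk so the kernel sees the new size (sdb is hypothetical)
echo 1 > /sys/class/block/sdb/device/rescan

# grow the physical volume, then the logical volume and filesystem in one step
pvresize /dev/sdb
lvextend -r -l +100%FREE /dev/vg_splunk/lv_indexes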
Good morning fellow Splunkthiasts! I am trying to build a dashboard using Splunk REST; unfortunately, I cannot get the data from certain endpoints when using the | rest SPL command, while the cURL approach returns what is expected. To be specific, I want to read the /services/search/jobs/<SID>/summary endpoint. The following SPL returns 0 results:

| rest /services/search/jobs/1648543133.8/summary

When called externally, the endpoint works as expected:

[2022-03-29 10:46:25] root@splunk1.lab2.local:~# curl -k -u admin:pass https://localhost:8089/services/search/jobs/1648543133.8/summary --get | head
  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed
100 15578  100 15578    0     0   661k      0 --:--:-- --:--:-- --:--:--  661k
<?xml version='1.0' encoding='UTF-8'?>
<results preview='0'>
<meta>
<fieldOrder>
<field>_bkt</field>
<field>_cd</field>
<field>_eventtype_color</field>
<field>_indextime</field>
<field>_kv</field>
<field>_raw</field>

The same happens with /services/search/jobs/<SID>/results and /services/search/jobs/<SID>/events. When I call /services/search/jobs/ or /services/search/jobs/<SID>, data is returned by both SPL and cURL. I tried this on several Splunk instances, with versions ranging from 8.2.3 back to 7.3.3, always using an account with the admin role; the behavior is always exactly the same. Any hints on what I might be missing?
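Not from the original post, but a hedged note: | rest is built around endpoints that return Atom entry lists, while the summary/results/events endpoints return raw result documents, which may be why | rest shows nothing for them. If the underlying goal is to pull a finished job's results into SPL, loadjob reads them directly by SID:

| loadjob 1648543133.8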