All Posts

Thanks for the reply. The search finds results accurately, but when I create an alert with "send to mobile", the alert is never triggered, even though running the search query manually returns results. BTW, the alert is configured as scheduled, not real-time.
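If it helps to narrow this down: the scheduler logs every run of a saved search, so a search along these lines (a minimal sketch; substitute your alert's actual name) should show whether the alert ran, was skipped, or matched zero results at trigger time:

index=_internal sourcetype=scheduler savedsearch_name="<your alert name>"
| table _time savedsearch_name status result_count run_time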
Hi everyone, does anyone know how to export Splunk ITSI entities to a CSV file, including their aliases, fields, and services? Thanks!
@GaetanVP I tried the query below and am not getting any results:

index="abc" sourcetype="600000304_gg_abs_ipc2" source="/amex/app/gfp-settlement-raw/logs/gfp-settlement-raw.log" "Reading Control-File /absin/CARS.HIERCTR."
| rex "(?<starttime>.*?)\s\[.*\s\].*[\r\n]+(?<endtime>.*?)\s\[.*\s\].*"
| table starttime endtime

Also, the end-of-process logs look like this:

2023-08-30 04:09:30.458 [INFO ] [Thread-40] FileEventCreator - Completed Settlement file processing, CARS.HIER.D082923.T002302 records processed: 161076

@GaetanVP please guide.
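Since the "Reading Control-File" and "Completed Settlement file processing" lines arrive here as two separate events, a single rex spanning both lines will never match. A sketch of an alternative that pairs the two events with stats instead (untested; adjust the search terms and time format as needed):

index="abc" sourcetype="600000304_gg_abs_ipc2" source="/amex/app/gfp-settlement-raw/logs/gfp-settlement-raw.log" ("Reading Control-File /absin/CARS.HIERCTR." OR "Completed Settlement file processing")
| eval stage=if(searchmatch("Reading Control-File"), "start", "end")
| stats min(eval(if(stage=="start", _time, null()))) as starttime max(eval(if(stage=="end", _time, null()))) as endtime
| fieldformat starttime=strftime(starttime, "%Y-%m-%d %H:%M:%S.%3N")
| fieldformat endtime=strftime(endtime, "%Y-%m-%d %H:%M:%S.%3N")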
Hi there, I'm pretty new to Splunk, so sorry if this is an easy task. I have the following example events in my index; they are an export from Zabbix monitoring:

8/31/23 4:39:31.000 PM
{
    description: mem Heap Memory used
    groups: [ ]
    hostname: WMS_NAME1
    itemid: 186985
    ns: 941726183
    tags.application: Memory
    type: 3
    value: 1199488000
}

8/31/23 4:39:31.000 PM
{
    description: mem Heap Memory max
    groups: [ ]
    hostname: WMS_NAME1
    itemid: 186984
    ns: 883128205
    tags.application: Memory
    type: 3
    value: 8589934592
}

Search query:

index="some_index" sourcetype="zabbix:history" hostname="WMS_NAME1" description="mem Heap Memory used" OR description="mem Heap Memory max"
| spath "groups{}"
| search "groups{}"="Instances/Tests*"
| eval ValueMB=value/1024/1024
| table _time, hostname, ValueMB

In this case there are two events: one for Java heap memory usage and one for Java heap max memory. Is there any way to rename the value variable based on the description in each event and join them in one table under the same time? Or maybe join both events into one? The main goal is to display both values in one graph and be able to monitor long-term usage. I found a way using multisearch, but it takes too much processing time and I believe there must be a simpler way. Thank you in advance for any hint.
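One way to get both series onto one time axis without multisearch is to derive a series name from the description and let timechart split on it. A minimal sketch based on the search above (the span and series names are assumptions; rename to taste):

index="some_index" sourcetype="zabbix:history" hostname="WMS_NAME1" (description="mem Heap Memory used" OR description="mem Heap Memory max")
| eval metric=if(description=="mem Heap Memory used", "HeapUsedMB", "HeapMaxMB")
| eval ValueMB=value/1024/1024
| timechart span=5m max(ValueMB) by metric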
Thanks for your response. We tried to distribute this configuration to our indexers, but it didn't work. We saw data coming in on the external system, but Splunk became unsearchable and the replication factor was not met. Do you see something wrong with this one?

[indexAndForward]
index = true
selectiveIndexing = false

[tcpout]
defaultGroup = external_system
forwardedindex.3.blacklist = (_internal|_audit|_telemetry|_introspection)

[external_system]
indexAndForward = true

[tcpout:external_system]
disabled = false
sendCookedData = false
server = <external_system>:<external_port>
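For reference, the outputs.conf spec puts the indexAndForward toggle in the global [tcpout] stanza rather than in a bare [external_system] stanza, which outputs.conf would not recognize. A sketch of the documented layout (unverified against this particular setup):

[indexAndForward]
index = true
selectiveIndexing = false

[tcpout]
defaultGroup = external_system
indexAndForward = true
forwardedindex.3.blacklist = (_internal|_audit|_telemetry|_introspection)

[tcpout:external_system]
disabled = false
sendCookedData = false
server = <external_system>:<external_port>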
Hello @aditsss, if you are receiving those two lines in the same event, you could try to use something like this:

index="abc" sourcetype="600000304_gg_abs_ipc2" source="/amex/app/gfp-settlement-raw/logs/gfp-settlement-raw.log" "Reading Control-File /absin/CARS.HIERCTR."
| rex "(?<starttime>.*?)\s\[.*\s\].*[\r\n]+(?<endtime>.*?)\s\[.*\s\].*"

This gave me the expected starttime/endtime results on my side (screenshot omitted). Hope it helps! GaetanVP
Were you able to solve the problem? If yes, please share the resolution.
My bad, I didn't look to see which database you were using. You may need quotes around it: "abc:def". Since you're doing this inside a quoted string, you may need to escape them as \" in the string.
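A minimal sketch of what that could look like with DB Connect's dbxquery (the connection and table names here are placeholders):

| dbxquery connection="my_postgres" query="SELECT \"abc:def\" FROM my_table"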
Thanks!!
@gcusello, thanks for the heads-up. As you said, I modified the regex:

| rex "fieldb=(?P<fieldb>\w*[\-|\_]\w*)\,"
| rex "fielda\:\s+(?P<fielda_X>\w*\-\w*)\$"

and used a where condition to find matches:

| where 'fielda_X'='fieldb'

It's working now as expected.
index = my_index
| top limit=10 field_1
| appendcols [search index = my_index | top limit=10 field_2]
| appendcols [search index = my_index | top limit=10 field_3]
| table field_1 field_2 field_3
@ITWhisperer yes..
Try something like this:

index=proxy sourcetype="proxy logs" user="*" NOT [| inputlookup lookup.csv | eventstats values(host) as host | mvexpand host | format ]
| stats count by user, host
In the olden days I would have said that computers are dumb and can only do what you tell them to do, but with advances in AI this is becoming less true. Having said that, Splunk still requires you to tell it what to do, and it can then automate what you are doing. So, how would you as a human determine how these events are related?
Hello, I'm new to Splunk and despite searching extensively on this community site, I was not able to find a solution for what I thought was a rather simple problem. I would like to list, for each field in my index, its top 10 values. I've tried different commands with stats values and top, and the following one gives me what's closest, but the output is messy:

index = my_index
| multireport
    [top limit=10 field_1]
    [top limit=10 field_2]
    [top limit=10 field_3]

I do get the top values of each field presented in different columns of the output, but also get many empty cells:

field_1                | field_2                | field_3
                       | a top value of field_2 |
                       | a top value of field_2 |
                       | a top value of field_2 |
                       |                        | a top value of field_3
                       |                        | a top value of field_3
a top value of field_1 |                        |
a top value of field_1 |                        |

while I would like something like this:

field_1                | field_2                | field_3
a top value of field_1 | a top value of field_2 | a top value of field_3
a top value of field_1 | a top value of field_2 | a top value of field_3
                       | a top value of field_2 |

Does someone have any idea how I could clean up the output and, ideally, easily loop through the column names so I don't have to write each name manually? Thanks!
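If it helps: fieldsummary already collects the top values for every field at once, without naming each field. A minimal sketch (note that values comes back as a JSON array of value/count pairs per field, not one column per field, so it may still need some massaging):

index = my_index
| fieldsummary maxvals=10
| fields field values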
Hi, I'm in the middle of testing deployment of the UF for a new setup. I started with 9.0.1, deploying it with ansible from a local yum repository as the initial push (that's the gist of it; there is a somewhat more complex infrastructure behind it, but it's not really relevant).

But now 9.1.1 came out, which was pointed out to me due to a security alert, so I updated the package in our repository and ran 'yum update' on one of my test servers, and this broke the UF. Apparently it needs to be started manually once with '--accept-license --answer-yes --no-prompt' to complete the upgrade and accept the license... again?

Is there a clever way of dealing with this so it just works after upgrading the rpm, short of modifying the rpm's spec file so it does some starting and stopping while the rpm is being upgraded? Manually doing this whenever there happens to be an update is just not an option due to the number of hosts; our regular updates run unattended with basically just 'yum/dnf update -y'.

Modifying the systemd unit file so it just starts with the required parameters does not appear to work with '_internal_launch_under_systemd'; replacing that with the old 'start' invocation makes the UF not work with systemd anymore. RHEL9 is going to forego the init.d folder, I think, so using the older, more flexible SysV scripts is not an option either. Any sort of manual intervention whenever there happens to be a new version is highly undesirable.
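One possible unattended route, as an untested sketch: the dnf post-transaction-actions plugin can run a command whenever a given package is installed or upgraded. This assumes the plugin is installed, that the action-file syntax below matches your plugin version (check its man page), and that the UF lives in the default /opt/splunkforwarder:

# /etc/dnf/plugins/post-transaction-actions.d/splunkuf.action
# Re-accept the license once after each splunkforwarder install/upgrade.
splunkforwarder:in:/opt/splunkforwarder/bin/splunk start --accept-license --answer-yes --no-prompt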
Thank you very much for the solution. Allowing old jQuery didn't help for us either, but the variant using _i18n_catalog["+-<string>"] works. Should you still find a solution with splunk.i18n, I would also be very interested in it, as we have this built into a lot of dashboards.
Hello all and @gcusello, just for information, I contacted Splunk support about this; here is what I learned:

1/ Indeed, there is no official documentation for that command, apparently for security reasons... Let's say it is security through obscurity (and I am not a fan of that concept).

2/ As assumed, it is impossible to revert a $6 value, since it has been hashed with the SHA-512 algorithm (just like a UNIX /etc/shadow file). But you can revert a $7 value if you have the correct splunk.secret value.

3/ Yes.

Thanks, GaetanVP
Hi Team, how can I fetch the start and end time from the logs below?

2023-08-30 00:29:00.018 [INFO ] [pool-3-thread-1] ReadControlFileImpl - Reading Control-File /absin/CARS.HIERCTR.D082923.T002302
2023-08-30 07:43:29.020 [INFO ] [Thread-18] FileEventCreator - Completed Settlement file processing, TRIM.UNB.D082923.T045920 records processed: 13283520

I want this start time and end time; can someone help me with the query? My current query:

index="abc" sourcetype="600000304_gg_abs_ipc2" source="/amex/app/gfp-settlement-raw/logs/gfp-settlement-raw.log" "Reading Control-File /absin/CARS.HIERCTR."
Hello, I put brackets around the field name, [abc:def], but it still didn't work and I got the following error:

org.postgresql.util.PSQLException: ERROR: syntax error at or near "["

Thank you