All Topics

My org has had a problem for a while now where our Splunk logs pulled from Salesforce are delayed by 1-2 hours. We are using the Splunk Add-on for Salesforce, and the delayed logs are coming from ApexCalloutEvent. From speaking to Salesforce and Splunk we were given a few options, which I detail below:

1. Research whether Salesforce can stream events from the logs to the Splunk HTTP Event Collector (push).
2. Work with our dev teams to take a copy of the Splunk Add-on and change the monitoring interval parameters from hourly to minutes.

I am open to any ideas; I just have not found much on this in the forums or the Splunk community.
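A minimal sketch of what option 1 (push) could look like, assuming a HEC token and endpoint have already been set up on the Splunk side. The URL, token, sourcetype, and index below are placeholders, not values from the add-on:

```python
import time

# Placeholder endpoint and token -- substitute your own HEC settings.
HEC_URL = "https://splunk.example.com:8088/services/collector/event"
HEC_TOKEN = "00000000-0000-0000-0000-000000000000"

def build_hec_event(raw_event, sourcetype="sfdc:apexcalloutevent", index="salesforce"):
    """Wrap a raw event dict in the envelope the HEC event endpoint expects."""
    return {
        "time": time.time(),      # event time, epoch seconds
        "sourcetype": sourcetype,
        "index": index,
        "event": raw_event,
    }

# Posting would then be (requires the third-party `requests` package):
# requests.post(HEC_URL,
#               headers={"Authorization": "Splunk " + HEC_TOKEN},
#               json=build_hec_event({"Method": "GET"}),
#               timeout=30)
```

The streaming side (subscribing to Salesforce platform events) is a separate piece; this only shows the HEC envelope format.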
I noticed that when performing a rolling upgrade of a search head cluster, captaincy is not automatically transferred to the first upgraded member. According to https://docs.splunk.com/Documentation/Splunk/9.0.2/DistSearch/SHCrollingupgrade: "The first upgraded member is elected captain when that member restarts after upgrade. This captaincy transfer occurs only once during a rolling upgrade." I chose a non-captain search head as the first member to upgrade.
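If the automatic transfer does not happen, captaincy can be moved by hand with the documented CLI command. A sketch (member URI and credentials are placeholders):

```
splunk transfer shcluster-captain -mgmt_uri https://sh1.example.com:8089 -auth admin:changeme
```

Run it from any current cluster member; `-mgmt_uri` names the member that should become captain.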
Hi, I need a search for the below scenario. If a previously assigned alert is reassigned to a different user on the portal, it triggers a new alert, because the updated time is what is considered in the correlation search. For example, an alert was initially detected on the portal; however, when I reassigned the alert to myself last week, a new alert was generated based on the updated time field. Thanks
We have a scenario where two nested macros using the same argument raise an error when the second one is parsed, because the double quotes in the argument disappear. We define the two macros as follows:

test_arguments2(1) -> `test_arguments("$args$")`
test_arguments(1) -> | eval arguments = $args$

And then run the following search:

| makeresults
`test_arguments2("[| makeresults | eval arguments = tostring(floor(relative_time(_time, "@y"))) | return $arguments]")`
| table arguments

It raises a parsing error, and the search log shows that after expanding the first macro, the double quotes are removed from the argument:

PARSING: | makeresults \n`test_arguments2("[| makeresults | eval arguments = tostring(floor(relative_time(_time, "@y"))) | return $arguments]")`\n| table arguments
AFTER EXPANDING MACROS: | makeresults \n| eval arguments = [| makeresults | eval arguments = tostring(floor(relative_time(_time, @y))) | return $arguments]\n| table arguments

It doesn't work either if we escape the double quotes:

| makeresults
`test_arguments2("[| makeresults | eval arguments = tostring(floor(relative_time(_time, \"@y\"))) | return $arguments]")`
| table arguments

Has anyone encountered a similar issue? How did you get around it or solve it?
Hi All, we have an index that is growing very rapidly, and we now want to move it to a new partition in our cluster dedicated to that index. What steps should I follow to do that with minimal downtime? Thanks and regards, Chiranjeev Singh
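For reference, the per-index paths live in indexes.conf. A sketch of what the stanza might look like after the move (index name and mount point are placeholders; the actual bucket data has to be copied to the new partition while the indexer is stopped, or the change rolled out peer by peer in a cluster):

```
[my_big_index]
homePath   = /mnt/newpartition/splunk/my_big_index/db
coldPath   = /mnt/newpartition/splunk/my_big_index/colddb
thawedPath = /mnt/newpartition/splunk/my_big_index/thaweddb
```

In an indexer cluster this would normally be pushed from the cluster manager as part of the configuration bundle so all peers change together.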
My log contains entries as shown below.

2023-03-03T14:14:12.718, Level=INFO, ProcessName=App-web, Thread=http-nio-80-exec-78, Code=c.m.Config, Message={"clientIp":"192.168.12.24","cost":1,"method":"GET","reqParam":{"userId":["25632"]},"resp":"{\"code\":1,\"data\":{\"list\":[{\"createDate\":1656942857926,\"groupId\":1000023,\"id\":1173,\"lastUpdate\":16569","user":"myemail@hotmail.com"}

I want to know how many users have used the application in the last hour.
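One approach (a sketch, not tested against your environment): extract the user with a regex and count distinct values over the last hour, i.e. a `rex` that captures the user value from the Message JSON, followed by `stats dc(user)` with `earliest=-1h`. The Python below demonstrates the same extraction on the sample line:

```python
import re

# Sample event from the question (truncated to the relevant fields).
line = ('2023-03-03T14:14:12.718, Level=INFO, ProcessName=App-web, '
        'Code=c.m.Config, Message={"clientIp":"192.168.12.24",'
        '"user":"myemail@hotmail.com"}')

def extract_user(event):
    """Pull the `user` value out of the embedded Message JSON."""
    m = re.search(r'"user":"([^"]+)"', event)
    return m.group(1) if m else None

# Distinct users across a batch of events:
users = {u for u in (extract_user(e) for e in [line]) if u}
```

In SPL the equivalent distinct count is what `stats dc(user)` computes once the field is extracted.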
Hi All, we have events like the one below, and we need to extract the IDs (for example d1c35370-1522-498c-8a79-ab07909a1c4a) as a new field when their status is running. There are multiple IDs like this in each event; the state shows "running" and the status field shows "Collector is running." (other values appear when a collector is not running).

2023-03-03T08:19:31,693 [INFO] [prod] [2f78061f-5f51-4636-8da1-3c9644b9e7a1] [34d3d64e-01c8-428e-a7b1-8b414dbd5478] [agent-AgentDataSourceStateManagerActor] - All collector health status has been updated- stateMap: [Map(d55c495c-52da-4e57-bc83-2ee02e92d978 -> running, 8194d562-beb4-4a44-a7f3-ec92ed549b3c -> running, e6f1b795-bf44-4640-880f-8b32f69586b7 -> running, 08ff35ad-f7b8-4ef2-bf29-1ccf5e50caad -> running, 4925c2fc-7f47-46e5-9a78-63e596bb469a -> running, d1c35370-1522-498c-8a79-ab07909a1c4a -> running, 8e7f28fa-26e9-445a-a5b3-50e5746ca8ca -> running, db52b5b0-31b2-43dc-8887-9f2859762a62 -> running)], statusMap: [Map(d55c495c-52da-4e57-bc83-2ee02e92d978 -> Collector is running., 8194d562-beb4-4a44-a7f3-ec92ed549b3c -> Collector is running., e6f1b795-bf44-4640-880f-8b32f69586b7 -> Collector is running., 08ff35ad-f7b8-4ef2-bf29-1ccf5e50caad -> Collector is running., 4925c2fc-7f47-46e5-9a78-63e596bb469a -> Collector is running., d1c35370-1522-498c-8a79-ab07909a1c4a -> Collector is running., 8e7f28fa-26e9-445a-a5b3-50e5746ca8ca -> Collector is running., db52b5b0-31b2-43dc-8887-9f2859762a62 -> Collector is running.)]
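A sketch of the extraction logic. In SPL this would likely be a `rex` with `max_match=0` capturing into a multivalue field; the regex below is an assumption based on the sample event, demonstrated in Python:

```python
import re

# Abbreviated stateMap from the sample event, with one non-running
# entry added to show the filtering.
event = ("stateMap: [Map(d55c495c-52da-4e57-bc83-2ee02e92d978 -> running, "
         "d1c35370-1522-498c-8a79-ab07909a1c4a -> running, "
         "db52b5b0-31b2-43dc-8887-9f2859762a62 -> stopped)]")

# Capture only IDs whose state is exactly "running".
uuid_re = r"([0-9a-f]{8}(?:-[0-9a-f]{4}){3}-[0-9a-f]{12}) -> running"
running_ids = re.findall(uuid_re, event)
```

The same pattern as an SPL rex would capture each ID into a multivalue field, which `mvexpand` could then split into one result per ID.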
I have the below logs:

Status: INFORMATION: Description: Beginning GDP Fransaction Script: 01-22-2023-01-13-04-PM
Status: INFORMATION: Description: txt file already exists
Status: INFORMATION: Description: csv file already exists
Status: OK: Description: C:\GDPFransactionScript\Inputs \GDPTestFile.csv copy to USB successful
Status: OK: Description: C:\GDPTransactionScript\Inputs \GDPTestFile.txt copy to USB successful
Status: ERROR: Description: http POST failed:
Status: ERROR: Description: https POST failed:
Status: INFORMATION: Description: End of GDP Transaction Script: 01-22-2023-01-13-04-PM

I have the following in my props:

CHARSET=AUTO
SHOULD_LINEMERGE=false
LINE_BREAKER=([\r\n]+)\Status
NO_BINARY_CHECK=true
disabled=false
TIME_PREFIX=^

But I am seeing an error like "Failed to parse timestamp. Defaulting to file modtime". How can I resolve this issue?
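With TIME_PREFIX=^, Splunk looks for a timestamp at the very start of each event, but in these logs the timestamp sits after "Script: ". A sketch of a props.conf stanza matching the format shown (the stanza name is a placeholder; events without their own timestamp will inherit the previous event's time):

```
[gdp_transaction_script]
CHARSET = AUTO
SHOULD_LINEMERGE = false
LINE_BREAKER = ([\r\n]+)(?=Status:)
NO_BINARY_CHECK = true
TIME_PREFIX = Script:\s+
TIME_FORMAT = %m-%d-%Y-%I-%M-%S-%p
MAX_TIMESTAMP_LOOKAHEAD = 32
```

TIME_FORMAT here decodes 01-22-2023-01-13-04-PM as month-day-year, 12-hour clock with an AM/PM marker.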
Hi, I have a test instance of Splunk, fresh out of the box. I only configured the essentials and imported a dump from OpenLibrary.org. My problem is that every single line should be one event, but plenty of lines are merged together, and I can't figure out why. The source data has reliable line breaks, so the default should work. I had the same issue in my enterprise environment and nobody could tell me why it was happening. After a while it magically disappeared, and as far as I can tell it has vanished for good, but I'd like to understand why this happens and how to prevent it from happening again. Regards, Thorsten
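Unwanted merging is usually SHOULD_LINEMERGE (true by default) gluing lines that lack a recognizable timestamp onto the previous event. A sketch of a props.conf stanza for such a dump (the sourcetype name is a placeholder, and DATETIME_CONFIG = CURRENT assumes the lines carry no usable timestamps of their own):

```
[openlibrary_dump]
SHOULD_LINEMERGE = false
LINE_BREAKER = ([\r\n]+)
DATETIME_CONFIG = CURRENT
```

With SHOULD_LINEMERGE = false, events are split purely on LINE_BREAKER, so timestamp recognition no longer influences event boundaries.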
Hi All, I have 4 indexes: index1, index2, index3, index4. Each index has its own search criteria; there are some common field names and some distinct field names.

index1: _time, field1, field2
index2: _time, field3, field4
index3: _time, field4, field2
index4: field5, field2, field6, field7, field8, field9, field10

I tried to use the multisearch command to merge the results for the indexes:

|multisearch
[search index=index1 TERM(str1) TERM(str2) NOT TERM(str3) NOT TERM(str4)]
[search index=index2 sourcetype="xxx" fieldName=TERM(str5) TERM(str6)]
[search index=index3 sourcetype="yyy" TERM(fieldName1=str7)]
[search index=index4 TERM(fieldName2=123) TERM(fieldName3=str7) OR TERM(fieldName4=str7) TERM(fieldName=str5)]
|head 6000
|sort 0 -_time
|stats first(_time) AS _time, first(field1) AS field1, first(field2) AS field2, first(field3) AS field3, first(field4) AS field4, first(field5) AS field5, first(field6) AS field6, first(field7) AS field7, first(field8) AS field8, first(field9) AS field9, first(field10) AS field10 BY index

I do get results for all indexes, but I face the following issues:
- The SPL runs too slowly to fetch the data.
- I do not get values for all expected fields of each index. For example, in index4 I do not get data for field8 and field9. But if I increase the head limit from 6000 to 10000, I get the data for those two fields as well.

Thus, I need your help with the following:
- To understand and implement a better way of merging Splunk search results.
- To improve the performance of the SPL while merging the results.

Thank you, Taruchit
Hi there, I've created a multiselect input called Source; its data configuration is set to a search, and "Use search results or job status as tokens" is ticked (I've also tried with this unticked). The Source filter shows the values I expect to see, but nothing happens when I tick one or more of the options. Not sure if the code below is relevant, but I am adding it in case it helps:

"inputs": { "input_global_trp": { "type": "input.timerange", "options": { "token": "global_time", "defaultValue": "@d,now" }, "title": "Global Time Range" }, "input_OvTI7NW3": { "options": { "items": ">frame(label, value) | prepend(formattedStatics) | objects()", "token": "ms_monMf0CU", "defaultValue": "" }, "title": "Source", "type": "input.multiselect", "dataSources": { "primary": "ds_ESObVruQ" }, "context": { "formattedConfig": { "number": { "prefix": "" } },
Short description: we have a particular search that we want to run during a specific period, and we want that search to stop after that period.

Long description: we are trying to introduce some automation into our Splunk searches, and one of the searches we are looking to automate forms part of our call-out process. We get called out for a specific event, and then have to run a few searches and set the time frame manually. The event that we get called out for is triggered once LoginLimit > 30, which we can view on a line graph. We know the event has ended because LoginLimit falls back below 30. What we are trying to achieve is to somehow start the searches automatically when LoginLimit > 30, stop them once LoginLimit < 30 again, and then output the results. Hopefully I've articulated this as clearly as possible; if not, I'll do my best to clear things up.
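The start/stop logic itself is simple to state. Here is a sketch in Python of finding the spans where LoginLimit stays above 30 (how you wire this into scheduled alerts is an assumption about your setup; one alert could fire on the upward crossing and another on the downward one):

```python
def over_threshold_spans(samples, threshold=30):
    """samples: list of (timestamp, value) pairs in time order.
    Returns (start, end) spans where value stayed above the threshold."""
    spans, start = [], None
    for ts, value in samples:
        if value > threshold and start is None:
            start = ts                 # crossed upward: span begins
        elif value <= threshold and start is not None:
            spans.append((start, ts))  # fell back below: span ends
            start = None
    if start is not None:              # still above at end of data
        spans.append((start, samples[-1][0]))
    return spans
```

For example, `over_threshold_spans([(0, 10), (1, 35), (2, 40), (3, 12)])` returns `[(1, 3)]`, i.e. one call-out window from timestamp 1 to 3.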
Hi All, I love link lists; however, I cannot find the appropriate pseudo-class to manipulate the appearance of the selected item. On this site, https://www.mediaevent.de/css/css-selektor-pseudo.html, there are examples of pseudo-classes, and "hover" and "selected" work like a charm; however, the link list loses the CSS styles once something else is selected in the dashboard. Can anyone help? Kind regards, Michael
Hi team, we are using Splunk Enterprise version 9.0.4. Alert emails are not getting triggered in the Splunk system. We are getting the following error:

sendemail:560 - [WinError 10060] A connection attempt failed because the connected party did not properly respond after a period of time, or established connection failed because connected host has failed to respond while sending mail to
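WinError 10060 is a plain TCP connection timeout, so a useful first step is checking that the Splunk host can reach the mail server on the configured port at all (host and port below are placeholders; a firewall between Splunk and the SMTP relay is a common cause):

```python
import socket

def smtp_reachable(host, port=25, timeout=5):
    """Return True if a TCP connection to host:port succeeds."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False
```

Run this (or an equivalent Test-NetConnection/telnet check) from the Splunk server itself, since that is the machine making the SMTP connection.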
More concise... Splunk is a tool that can analyze and visualize various types of data. The advantage of visualization is that you can understand what you could not understand before. By showing the visualized results to others, we can reach a common understanding and collaborate with each other. I think this is one of the reasons why Splunk is such a useful tool. However, as you become more familiar with Splunk's Search Processing Language (SPL), your SPL tends to become longer and more cluttered. Let's write SPL more concisely! Start by making the numbers manageable. Since Splunk can be described as a tool with a search function plus an analysis function, it handles various kinds of numbers depending on how it is used. This time, I've created a macro library for unit conversion, from the point of view of what can be done to handle these "various kinds of numbers" more concisely.

Area: How many square feet is a square meter (㎡)?
Length: How many centimeters is an inch? How many feet, how many yards?
Volume: How many liters are in a gallon?
Temperature: How many degrees Celsius (°C) is a Fahrenheit (°F) value? How many Kelvin (K)?
Data transfer rate: How many bytes per second is a Mbps?

The following other units of measure are also supported for conversion. (Units in the same category can be converted to each other.)
List of units that can be converted, by macro category:

Area: Acre[ac], Hectare[ha], Square foot[sq ft], Square inch[sq in], Square kilometer[km2], Square meter[m2], Square mile[sq mi], Square yard[sq yd], Tatami mats[畳], Tsubo[坪]
Data Transfer Rate: Bit per second[bps], Byte per second[B/s], Gibibit per second[GiBit/s], Gigabit per second[Gbps], Gigabyte per second[GB/s], Kibibit per second[KiBit/s], Kibibyte per second[KiB/s], Kilobit per second[kbps], Kilobyte per second[KB/s], Mebibit per second[MiBit/s], Mebibyte per second[MiB/s], Megabit per second[Mbps], Megabyte per second[MB/s], Tebibit per second[TiBit/s], Tebibyte per second[TiB/s], Terabit per second[Tbps], Terabyte per second[TB/s]
Digital Storage: Bit[bit], Byte[bytes], Gibibit[GiBit], Gibibyte[GiB], Gigabit[Gbit], Gigabyte[GB], Kibibit[KiBit], Kibibyte[KiB], Kilobit[Kbit], Kilobyte[KB], Mebibit[MiBit], Mebibyte[MiB], Megabit[Mbit], Megabyte[MB], Pebibit[PiBit], Pebibyte[PiB], Petabit[Pbit], Petabyte[PB], Tebibit[TiBit], Tebibyte[TiB], Terabit[Tbit], Terabyte[TB]
Energy: British thermal unit[Btu], Electronvolt[eV], Foot-pound[ft lbf], Gram calorie[cal], Joule[J], Kilocalorie[kcal], Kilojoule[kJ], Kilowatt hour[kW-h], US therm[therm(US)], Watt hour[W-h]
Frequency: Gigahertz[GHz], Hertz[Hz], Kilohertz[kHz], Megahertz[MHz]
Fuel Economy: Kilometer per liter[km/L], Liter per 100 kilometers[l/100km], Miles per gallon (imperial)[mpg (Imp)], Miles per gallon[mpg (US)]
Length: Centimeter[cm], Foot[ft], Inch[in], Kilometer[km], Meter[m], Micrometer[μm], Mile[mi], Millimeter[mm], Nanometer[nm], Nautical mile[nmi], Yard[yd]
Mass: Gram[g], Imperial ton[long tn], Kilogram[kg], Metric ton[t], Microgram[μg], Milligram[mg], Ounce[oz av], Pound[lb av], Stone[st], US ton[sh tn]
Plane Angle: Arcsecond[″], Degree[°], Gradian[grad], Milliradian[μ], Minute of arc[′], Radian[rad]
Pressure: Bar[bar], Pascal[Pa], Pound-force per square inch[psi], Standard atmosphere[atm], Torr[Torr]
Speed: Foot per second[fps], Kilometer per hour[km/h], Knot[kn], Meter per second[m/s], Mile per hour[mph]
Temperature: Degree Celsius[°C], Fahrenheit[°F], Kelvin[K]
Time: Calendar year[y], Century[C], Day[d], Decade[decade], Hour[h], Microsecond[μs], Millisecond[ms], Minute[min], Month[mo], Nanosecond[ns], Second[s], Week[wk]
Volume: Cubic foot[cu ft], Cubic inch[cu in], Cubic meter[m3], Imperial cup[c (Imp)], Imperial fluid ounce[fl oz (Imp)], Imperial gallon[gal (Imp)], Imperial pint[pt (Imp)], Imperial quart[qt (Imp)], Imperial tablespoon[tbsp (Imp)], Imperial teaspoon[tsp (Imp)], Liter[l], Milliliter[ml], US fluid ounce[US fl oz], US legal cup[c (US)], US liquid gallon[gal (US)], US liquid pint[pt (US fl)], US liquid quart[qt (US)], US tablespoon[tbsp (US)], US teaspoon[tsp (US)]

The app includes a rather useful dashboard, "Conversion of units", where you can find the macros and learn how to use them; it also doubles as a quick-reference table for unit conversions. Drill down (click on the table) to open a sample SPL in a search.

Usage example (legend of macro usage):

| eval FieldAfterConversion = `numeral_UnitBefore_to_UnitAfter(FieldBeforeConversion)`

There is also a dashboard, "large numbers to human readable", for checking how to use the macros that display large numbers with expressions such as "150 million", and the macros that convert byte counts into easily readable units.

When you finish exploring and decide that you no longer need the UI, you can hide the app and continue using only the macros. Steps to hide the app: "Manage Apps" → "Edit Properties" of Numeral system macros for Splunk → Visible: check "No" → "Save".

Let's give it a try! Numeral system macros for Splunk: https://splunkbase.splunk.com/app/6595 (Please rate this app ★★★★★!)
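As a concrete illustration of what one of these conversions does under the hood, here are the standard temperature formulas in plain Python (this is not the macro implementation itself, just the math the macros encapsulate):

```python
def fahrenheit_to_celsius(f):
    # °C = (°F - 32) × 5/9
    return (f - 32) * 5 / 9

def celsius_to_kelvin(c):
    # K = °C + 273.15
    return c + 273.15
```

In the macro library, the same arithmetic would be wrapped in an eval-style macro so it can be applied to a field in one pipeline step.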
Hi All, I have a query with regards to log monitoring. Let's say I want to monitor abc.log, and the last updated date of the log file is Aug 2022 or Sep 2022. I install the UF on the log server in Feb 2023 and create an inputs monitor for abc.log. Does Splunk monitor the old data that is already in the log file from Aug or Sep 2022 and show those logs in Splunk?
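For reference: by default a new monitor input reads the whole file from the beginning, so the Aug/Sep 2022 entries would be indexed with their own timestamps. If you instead want to skip files that have not been modified recently, inputs.conf has ignoreOlderThan. A sketch (paths and values are illustrative; note that it skips entire files by modification time, it cannot filter individual old events inside a file that is still being written):

```
[monitor:///var/log/abc.log]
sourcetype = abc_log
# Skip files whose modification time is older than 30 days:
ignoreOlderThan = 30d
```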
Hi, I am trying to publish a technical add-on for Splunk Enterprise. Any help with this response would be much appreciated. This is the feedback I have received from the Splunk team:

The Technical Add-On for Splunk did not qualify for Splunk Cloud compatibility for the following reasons:

check_for_supported_tls: If you are using requests.post to talk to your own infra with non-public PKI, make sure you bundle your own CA certs as part of your app and pass the path into requests.post as an arg.
File: bin/sirp.py
Line Number: 241
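A sketch of the pattern the check is asking for: ship the CA bundle inside the app and pass its path to requests.post via the verify argument. The directory layout and bundle file name here are placeholders, not taken from your add-on:

```python
import os

def post_kwargs(app_root, ca_bundle="ca_bundle.pem"):
    """Build requests.post kwargs pinning TLS verification to the
    CA bundle shipped inside the app (placeholder paths)."""
    ca_path = os.path.join(app_root, "certs", ca_bundle)
    return {"verify": ca_path, "timeout": 30}

# Usage (requires the third-party `requests` package):
# requests.post(url, json=payload,
#               **post_kwargs("/opt/splunk/etc/apps/my_ta"))
```

The key point for AppInspect is that verify is never False and never relies on a non-public CA being installed system-wide; the bundle travels with the app.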
I want the legend of my bar chart to be at the bottom-left. Is there any XML code to do it that way?
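In Simple XML, the legend position is controlled by the charting.legend.placement option, which (as far as I know) accepts right, left, top, bottom, or none; there is no combined bottom-left value, so bottom is the closest. A sketch (the search query is just an example):

```
<chart>
  <search>
    <query>index=_internal | timechart count BY sourcetype</query>
  </search>
  <option name="charting.legend.placement">bottom</option>
</chart>
```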
Hi Team, we are facing an issue with the mail trigger. SMTP connections are valid, but mail is not triggered and we receive the following error:

command="sendemail", [WinError 10060] A connection attempt failed because the connected party did not properly respond after a period of time, or established connection failed because connected host has failed to respond while sending mail

Splunk Version: 9.0.4. Kindly support on this.
This returns thousands of entries:

index=myindex sourcetype=mysourcetype

This returns all (8 at the moment) uuid values, all starting with '211d':

index=myindex sourcetype=mysourcetype | table uuid | dedup uuid

211d644bc2
211d788fa3
211d520cc2
etc.

These return nothing (0 matches found) for the same time period as the previous two queries:

index=myindex sourcetype=mysourcetype uuid=211d*
index=myindex sourcetype=mysourcetype uuid="211d*"
index=myindex sourcetype=mysourcetype uuid=211d%
index=myindex sourcetype=mysourcetype uuid="211d%"

Why is this? Is it an indexing issue?
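One thing worth trying (a sketch, and an assumption about the cause): wildcarded field terms before the first pipe rely on index-time segmentation, so if uuid is a search-time extraction from inside a larger token, uuid=211d* can miss events that a post-pipe filter finds. Also, % is only a wildcard inside like(); in a bare field=value search it is a literal character, which is why the last two variants match nothing.

```
index=myindex sourcetype=mysourcetype
| where like(uuid, "211d%")
```

This scans more events than an indexed-term match would, but it filters on the extracted field value directly.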