All Posts

Find Answers
Ask questions. Get answers. Find technical product solutions from passionate members of the Splunk community.


I'm also trying to figure out this problem. Salesforce community topics state that some permissions are missing on the Salesforce side. I will update this thread when a solution is found.
What sorts of results are you trying to post as a note? You can plug just about anything you want into a utility block calling the add note function. You can insert a format block just before the note block and use its formatted_data (not formatted_data.*) output to make it look nicer or combine info from different sources.
Above is the event; not sure why this is showing up as two different events. Anyway, I have written a Splunk query according to my requirements, but the output is not good. I want to get rid of the Service and Maintenance Start Time in MST columns.

Let me summarize the use case: You have ONE single log,

Mon Oct 16 07:29:46 MST 2023
MIME-Version: 1.0
Content-Disposition: inline
Subject: INFO - Services are in Maintenance Mode over 2 hours -- AtWork-CIW-E1
Content-Type: text/html
<font size=3 color=black>Hi Team,</br></br>Please find below servers which are in maintenance mode for more than 2 hours; </br></br></font>
<table border=2>
<TR bgcolor=#D6EAF8><TH colspan=2>Cluster Name: AtWork-CIW-E1</TH></TR>
<TR bgcolor=#D6EAF8><TH colspan=1>Service</TH><TH colspan=1>Maintenance Start Time in MST</TH></TR>
<TR bgcolor=#FFB6C1><TH colspan=1>oozie</TH><TH colspan=1>Mon Oct 16 07:29:46 MST 2023</TH></TR>
</table>
<font size=3 color=black></br> Script Path:/amex/ansible/maintenance_mode_service</font>
<font size=3 color=black></br></br>Thank you,</br>BDP Spark Support Team</font>

But the Splunk indexer gives you TWO events (with different time values):

Mon Oct 16 07:31:53 MST 2023
MIME-Version: 1.0
Content-Disposition: inline
Subject: INFO - Services are in Maintenance Mode over 2 hours -- AtWork-CIW-E1
Content-Type: text/html
<font size=3 color=black>Hi Team,</br></br>Please find below servers which are in maintenance mode for more than 2 hours; </br></br></font>
<table border=2>
<TR bgcolor=#D6EAF8><TH colspan=2>Cluster Name: AtWork-CIW-E1</TH></TR>
<TR bgcolor=#D6EAF8><TH colspan=1>Service</TH><TH colspan=1>Maintenance Start Time in MST</TH></TR>

Mon Oct 16 07:29:46 MST 2023
<TR bgcolor=#FFB6C1><TH colspan=1>oozie</TH><TH colspan=1>Mon Oct 16 07:29:46 MST 2023</TH></TR>
</table>
<font size=3 color=black></br> Script Path:/amex/ansible/maintenance_mode_service</font>
<font size=3 color=black></br></br>Thank you,</br>BDP Spark Support Team</font>

You want to use a search command to combine the data in these two
into one table row. Is this correct? Most importantly, you have a line-break problem in ingestion. This is what you really need to fix. By default, Splunk has the habit of hunting for timestamps and using them as a clue that a new event begins. This is why the "second" event has the time Mon Oct 16 07:29:46 MST 2023, which is actually the maintenance start time, not the time of the log, which should be later, namely Mon Oct 16 07:31:53 MST 2023. If you do not fix the line-break problem, there is no end to the trouble down the road, no matter how many clever workarounds you devise. That said, it is possible to work around this particular log by restoring the complete log using transaction. (Warning: the workaround may break other things.) Second, try not to capture everything by counting word breaks or even HTML tags. HTML is really the worst enemy of Splunk because HTML's semantics are totally separate from the semantics of the content. Always try to anchor regex on 1) content semantics, then 2) HTML semantics.
Here is a proposal:

| transaction startswith="Script Path" endswith="MIME-Version"
| eval _time = _time + duration ``` restore actual event time; this may not be of interest ```
| rex "Cluster Name:\s*(?<ClusterName>[^<]+)"
| rex "<TR[^>]*><TH[^>]*>(?<Service>[^<]+)<\/TH><TH[^>]*>(?<MaintenanceStartTime>[^<]+)"
| table ClusterName Service MaintenanceStartTime

The two events should give you

ClusterName	Service	MaintenanceStartTime
AtWork-CIW-E1	oozie	Mon Oct 16 07:29:46 MST 2023

Here is an emulation that you can play with and compare with real data:

| makeresults
| eval data=split("MIME-Version: 1.0 Content-Disposition: inline Subject: INFO - Services are in Maintenance Mode over 2 hours -- AtWork-CIW-E1 Content-Type: text/html <font size=3 color=black>Hi Team,</br></br>Please find below servers which are in maintenance mode for more than 2 hours; </br></br></font> <table border=2> <TR bgcolor=#D6EAF8><TH colspan=2>Cluster Name: AtWork-CIW-E1</TH></TR> <TR bgcolor=#D6EAF8><TH colspan=1>Service</TH><TH colspan=1>Maintenance Start Time in MST</TH></TR> <TR bgcolor=#FFB6C1><TH colspan=1>oozie</TH><TH colspan=1>Mon Oct 16 07:29:46 MST 2023</TH></TR> </table> <font size=3 color=black></br> Script Path:/amex/ansible/maintenance_mode_service</font> <font size=3 color=black></br></br>Thank you,</br>BDP Spark Support Team</font>", " ")
| mvexpand data
| eval _time = if(match(data, "Mon Oct 16 07:29:46 MST 2023"), strptime("Mon Oct 16 07:29:46 MST 2023", "%a %b %d %H:%M:%S %Z %Y"), strptime("Mon Oct 16 07:31:53 MST 2023", "%a %b %d %H:%M:%S %Z %Y"))
| rename data AS _raw
``` data emulation above ```

Do not forget: your most important task is to fix line breaks. (There are many guides in the Splunk documentation, and various answers in this forum.)
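To illustrate the ingestion-side fix, here is a sketch of props.conf settings. The sourcetype name is a placeholder, and the break regex assumes every log begins with a "Mon Oct 16 07:31:53 MST 2023"-style line immediately followed by the MIME-Version header; adjust it against real data before deploying.

```
[maintenance_mode_email]
# hypothetical sourcetype; one event per log:
# break only before a timestamp line that is followed by a MIME header
SHOULD_LINEMERGE = false
LINE_BREAKER = ([\r\n]+)(?=\w{3} \w{3} \d{1,2} \d{2}:\d{2}:\d{2} \w{3} \d{4}\s+MIME-Version:)
TIME_FORMAT = %a %b %d %H:%M:%S %Z %Y
MAX_TIMESTAMP_LOOKAHEAD = 30
```

With this in place, the embedded maintenance-start timestamp inside the HTML table no longer triggers a new event, because it is not followed by MIME-Version.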
Hi All, I have logs like below:

</tr> <tr> <td ><b><font color=blue>Asia</font></b></td> <td >Samsung_AA</td> <td ><b><font color=green>Singapore</font></b></td> <td ><b><font color="green">UP</font></b></td> <td >1100</td> <td >311-1000</td> <td >311-1000</td> <td >0-200000</td> <td >3172-3</td> <td >55663</td> <td >NC</td> <td >3.983-20000</td> <td >11112-20000</td> <td >6521-10000</td>

I used the query below to get the table that follows:

... | rex field=_raw "\<tr\>\s+\<td\s\>\<b\>\<\w+\s\w+\=\w+\>(?P<Region>[^\<]+)\<\/\w+\>\<\/b\>\<\/td\>"
| rex field=_raw "\<tr\>\s+\<td\s\>\<b\>\<\w+\s\w+\=\w+\>[^\<]+\<\/\w+\>\<\/b\>\<\/td\>\s+\<td\s\>(?P<VPN_Name>[^\<]+)\<\/td\>"
| rex field=_raw "\<tr\>\s+\<td\s\>\<b\>\<\w+\s\w+\=\w+\>[^\<]+\<\/\w+\>\<\/b\>\<\/td\>\s+\<td\s\>[^\<]+\<\/td\>\s+\<td\s\>\<b\>\<\w+\s\w+\=\w+\>(?P<Country>[^\<]+)\<\/\w+\>\<\/b\>\<\/td\>"
| rex field=_raw "\<tr\>\s+\<td\s\>\<b\>\<\w+\s\w+\=\w+\>[^\<]+\<\/\w+\>\<\/b\>\<\/td\>\s+\<td\s\>[^\<]+\<\/td\>\s+\<td\s\>\<b\>\<\w+\s\w+\=\w+\>[^\<]+\<\/\w+\>\<\/b\>\<\/td\>\s+\<td\s\>\<b\>\<\w+\s\w+\=\"\w+\"\>(?P<VPN_Status>[^\<]+)\<\/\w+\>\<\/b\>\<\/td>"
| rex field=_raw "\<tr\>\s+\<td\s\>\<b\>\<\w+\s\w+\=\w+\>[^\<]+\<\/\w+\>\<\/b\>\<\/td\>\s+\<td\s\>[^\<]+\<\/td\>\s+\<td\s\>\<b\>\<\w+\s\w+\=\w+\>[^\<]+\<\/\w+\>\<\/b\>\<\/td\>\s+\<td\s\>\<b\>\<\w+\s\w+\=\"\w+\"\>[^\<]+\<\/\w+\>\<\/b\>\<\/td>\s+\<td\s\>(?P<Spooled>[^\<]+)\<\/td\>\s+\<td\s\>(?P<Conn_Max>[^\<]+)\<\/td\>\s+\<td\s\>(?P<Conn_SMF_Max>[^\<]+)\<\/td\>\s+\<td\s\>(?P<Conn_Rest_Max>[^\<]+)\<\/td\>\s+\<td\s\>(?P<Queue_Topic>[^\<]+)\<\/td\>\s+\<td\s\>(?P<SMF_SSL>[^\<]+)\<\/td\>\s+\<td\s\>(?P<Rest_SSL>[^\<]+)\<\/td\>\s+\<td\s\>(?P<Spool_Usage_Max>[^\<]+)\<\/td\>\s+\<td\s\>(?P<Ingress_Usage_Max>[^\<]+)\<\/td\>\s+\<td\s\>(?P<Egress_Usage_Max>[^\<]+)\<\/td\>"
| eval Time_Stamp=strftime(_time, "%b %d, %Y %I:%M:%S %p")
| replace "UAT2-L2" with "NGC" in Region
| replace "UAT2-L1" with "GC" in Region
| search Region="Asia"
| search VPN_Status="UP"
|
table Time_Stamp,VPN_Name,Spooled,Conn_Max,Conn_SMF_Max,Conn_Rest_Max,Queue_Topic,Spool_Usage_Max,Ingress_Usage_Max,Egress_Usage_Max
| dedup VPN_Name

Time_Stamp	VPN_Name	Spooled	Conn_Max	Conn_SMF_Max	Conn_Rest_Max	Queue_Topic	Spool_Usage_Max	Ingress_Usage_Max	Egress_Usage_Max
Oct 16, 2023 03:51:08 AM	Samsung_AB	0	1-500	1-500	0-200000	3-2	0.000-5000	0-10000	0-10000
Oct 16, 2023 03:51:08 AM	Samsung_AA	1100	311-1000	311-1000	0-200000	3172-3	3.983-20000	11112-20000	6521-10000

In this table, I want to color-code the cells in the columns Conn_Max, Conn_SMF_Max, Conn_Rest_Max, Spool_Usage_Max, Ingress_Usage_Max and Egress_Usage_Max, according to whether the first part of the field value is greater than or equal to 50% or 80% of the second part. For example, if Conn_Max is 6500-10000 it should be yellow, and if it is 8500-10000 it should be red. Please help me modify the query or source code so that I can get the required cells color-coded as per my requirements. Your kind inputs are highly appreciated. Thank you!
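The 50%/80% threshold itself can be computed in SPL before any coloring is applied. A minimal sketch for one column, assuming the values always follow the number-number pattern (the severity field name is made up):

```
| eval parts = split(Conn_Max, "-")
| eval pct = 100 * tonumber(mvindex(parts, 0)) / tonumber(mvindex(parts, 1))
| eval Conn_Max_severity = case(pct >= 80, "red", pct >= 50, "yellow", true(), "none")
```

A field like this can then drive the cell coloring from a dashboard table format or a JS/CSS extension; the same eval would be repeated (or wrapped in foreach) for the other five columns.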
Hello everyone, I am concerned about single-event-match (e.g. observable-based) searches and the indexing delay events may have. Would the use of an accelerated DM allow me to simply ignore a situation like the one below, while still ensuring that such an event is taken into account? If so, how? I read that Data Models "faithfully deal with late-arriving events with no upkeep or mitigation required"; however, I am still concerned about what would happen in a case such as the one depicted in the image I'm uploading, where:
- T0 is the moment when the event happened / was logged (_time)
- T1 is the first moment taken into account by the search (earliest)
- T2 is the moment when the event was indexed (_indextime)
- T3 is the last moment taken into account by the search (latest)
What about, instead, taking a "larger" time frame for earliest / latest and then focusing on the events that arrived between _index_earliest / _index_latest? Would this ensure that every single event is taken into account by such a search? (Splunk suggests "When using index-time based modifiers such as _index_earliest and _index_latest, [...] you must run your search using All Time", and although I'm not entirely sure about the performance impact of doing so while still filtering by _indextime, I think it would still be a good idea to account for an ideal maximum event lag, big but not too big, e.g. 24h, similar to the one mentioned at https://docs.splunk.com/Documentation/Splunk/9.1.1/Report/Durablesearch#Set_time_lag_for_late-arriving_events , whose surpassing could generate an alert of its own.) Are there different and simpler ways to achieve such mathematical certainty, regardless of the indexing delay? (Of course, given that the search isn't skipped.) Thank you all
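As a sketch of the larger-window approach described above (the index, sourcetype, window sizes, and the ~24h lag allowance are all illustrative), each scheduled run would cover a sliding one-hour slice of index time while looking back far enough in event time to catch events that arrived up to a day late:

```
index=security sourcetype=authentication earliest=-48h@h latest=now _index_earliest=-25h@h _index_latest=-1h@h
| stats count by user
```

Under these assumptions, an event is examined exactly once, in whichever run's index-time slice it landed in, regardless of how far back its _time falls within the 48h event-time window.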
I would anchor the regex only as tightly as the data is regular. Something like:

| rex field=source "\\\t4\\\(apch\\\node|logs)\\\(?<node>[^-\\\\]+)"
I would be cautious to anchor regex as closely as the data is regular.  Something like   | rex field=source "\\\t4\\\(apch\\\node|logs)\\\(?<node>[^-\\\\]+)"   This should give node source node06 E:\view\int\t4\apch\node\node06\log\server.log node06 E:\view\int\t4\apch\node\node06\log\run.log node03 E:\view\int\t4\apch\node\node03\log\server.log node01 E:\view\int\t4\apch\node\node01\log\server.log node01 E:\view\int\t4\apch\node\node01\log\run.log core02 E:\view\int\t4\logs\core02-core.log web37 E:\view\int\t4\logs\web37-wfmws.log core01 E:\view\int\t4\logs\core01-core.log You can play with the emulation @ITWhisperer offered and compare with real data.   | makeresults format=csv data="source E:\view\int\t4\apch\node\node06\log\server.log E:\view\int\t4\apch\node\node06\log\run.log E:\view\int\t4\apch\node\node03\log\server.log E:\view\int\t4\apch\node\node01\log\server.log E:\view\int\t4\apch\node\node01\log\run.log E:\view\int\t4\logs\core02-core.log E:\view\int\t4\logs\web37-wfmws.log E:\view\int\t4\logs\core01-core.log" ``` data emulation above ```      
When I use timechart, if some trailing buckets have zero count, they are displayed as zero on the time axis, which extends to the end of the search window. But in the same time window, if I use chart over _time, trailing zero-count buckets are removed. For example,

index = _internal earliest=-3h@h latest=+3h@h ``` simulate trailing zero-count buckets ```
| timechart span=1h count

This gives

_time	count
2023-10-19 05:00	33798
2023-10-19 06:00	33798
2023-10-19 07:00	33949
2023-10-19 08:00	27416
2023-10-19 09:00	0
2023-10-19 10:00	0

Note the last two buckets are zero-count. Whereas this

index = _internal earliest=-3h@h latest=+3h@h ``` simulate zero-count buckets ```
| bucket _time span=1h
| chart count over _time

gives

_time	count
2023-10-19 05:00	33798
2023-10-19 06:00	33798
2023-10-19 07:00	33949
2023-10-19 08:00	27438

The two trailing buckets are not listed, even though info_max_time is exactly the same. Is there a way to force chart to list all _time buckets between info_min_time and info_max_time?
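One workaround to sketch (an assumption-laden sketch, not a definitive answer): generate the full set of hourly bucket timestamps for the search window via addinfo, append them as zero-count rows, and merge, so chart output has a row even for empty trailing buckets:

```
index = _internal earliest=-3h@h latest=+3h@h
| bucket _time span=1h
| chart count over _time
| appendpipe
    [| stats count AS discard
     | addinfo
     | eval _time = mvrange(tonumber(info_min_time), tonumber(info_max_time), 3600)
     | mvexpand _time
     | eval count = 0
     | fields _time count]
| stats max(count) AS count by _time
```

The final stats keeps the real count where one exists and the synthetic 0 otherwise. The 3600 step assumes span=1h and a window aligned to @h.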
I don't think there is a way to do that in a single search. After all, you are looking for specific events (search one) and then trying to expand around each of those events (searches 2+). There is the map command, which technically isn't a single search as it runs once for each event, up to the maxsearches value. That's really not a great solution, as it could easily end up running hundreds of searches to actually be useful, with terrible performance. You could reduce maxsearches, but then you would not get data for each event in the base search. The best way I can think of to do it would be a dashboard with a base search and drilldowns that pass values to a second search to get more detailed information.
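A sketch of the map form described above, with placeholder index and field names, pre-computing the per-event time window so it can be passed as tokens:

```
index=main sourcetype=app "ERROR"
| head 10
| eval early = _time - 300, late = _time + 300
| map maxsearches=10 search="search index=main host=$host$ earliest=$early$ latest=$late$"
```

Each of the 10 base events spawns its own search over ±5 minutes around that event, which is exactly why this approach scales so poorly.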
Hello everyone! I created a role for Splunk users who will be able to edit alerts. What capabilities should I choose for such users, so that the set is minimal but sufficient?
Hi all, I was trying to get details on some detected backends, for example the call type (async, RMI, etc.) and the class:method used for the exit call, typically detailed in transaction snapshots. Although these backends are visible in the application dashboard, there is no way to view only the snapshots that include a specific backend. Instead, one has to go through all the snapshots randomly until finding the one of interest. The only filtering currently available on transaction snapshots is via Business Transaction, Error, Execution Time, HTTP request Details, Data Collector, and GUIDs. I think filtering on the backend could speed up investigation and troubleshooting.
Perhaps if you provided some more realistic (but anonymised) sample events, and a representation of the output you are trying to achieve, we may be able to help you to a solution.
Were you able to find anything or build anything?
Here is a runanywhere example showing it working:

| makeresults format=csv data="source
E:\view\int\t4\apch\node\node06\log\server.log
E:\view\int\t4\apch\node\node06\log\run.log
E:\view\int\t4\apch\node\node03\log\server.log
E:\view\int\t4\apch\node\node01\log\server.log
E:\view\int\t4\apch\node\node01\log\run.log
E:\view\int\t4\logs\core02-core.log
E:\view\int\t4\logs\web37-wfmws.log
E:\view\int\t4\logs\core01-core.log"
| rex field=source "^([^\\\\]+\\\\){5}(?<node>[^-]+)"
| rex field=source "^([^\\\\]+\\\\){6}(?<node>[^\\\\]+)"

Note: if these different formats for source are used in the same search, then the order is significant; otherwise, just use the rex pertaining to the relevant source name format.
I go with the default indexing of raw. However, I had to change my output from key1=value1,key2=value2,key3=value3 (no space after comma) into key1=value1, key2=value2, key3=value3 (space after comma)
I have configured OAuth in a custom account in the Splunk Salesforce Add-on app. After configuring the account and saving the configuration, it reaches out to Salesforce. I log in to Salesforce and it asks me to grant access. Once I click submit, it comes back with the error "Error occurred while trying to authenticate. Please try Again" in the app. I am not sure what the issue is, or whether something needs to be configured on the Salesforce side.
Anything written by a script to stdout is indexed as a raw event by Splunk.  You can use props.conf settings to extract fields from the event.  By default, Splunk will extract key and values that are in key=value format, so perhaps your PS script could do that.
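As a sketch (the sourcetype name is a placeholder): if the script emits lines like status=ok duration=42 user=jdoe, the default search-time extraction already picks up the pairs, which can be made explicit in props.conf:

```
[my_script_output]
# "auto" is the default: automatic key=value extraction at search time
KV_MODE = auto
```

No custom regex is needed as long as the script sticks to plain key=value tokens separated by whitespace.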
@ITWhisperer I tried using the above rex for these log sources, but it is not working.

For the 5 log sources below, I would like to extract the node number, e.g. node06, node03, node01:
E:\view\int\t4\apch\node\node06\log\server.log
E:\view\int\t4\apch\node\node06\log\run.log
E:\view\int\t4\apch\node\node03\log\server.log
E:\view\int\t4\apch\node\node01\log\server.log
E:\view\int\t4\apch\node\node01\log\run.log

For the 3 log sources below, I would like to extract core02, web37, core01:
E:\view\int\t4\logs\core02-core.log
E:\view\int\t4\logs\web37-wfmws.log
E:\view\int\t4\logs\core01-core.log

Since the two log formats are different, the solution you shared is not working. Please help.
Thank you @ITWhisperer. I was running stats again to capture a count that was already present in the data, along with the hour, as you mentioned. Here is the final query:

index=summary_index_1d "value=Summary_test" app_name=abc HTTP_STATUS_CODE=2xx
| eval current_day = strftime(now(), "%A")
| eval log_day = strftime(_time, "%A")
| eval day = strftime(_time, "%d")
| eval dayOfWeek = strftime(_time, "%u")
| where dayOfWeek >= 1 AND dayOfWeek <= 5
| stats avg(count_value) by log_day, hour, day

Let me know if any other changes to the query could improve its performance. Thanks again.
We are utilizing the Log Event trigger action for an alert, and we'd essentially like to duplicate the event that's found into another index. Some renaming happens in the alert, so pulling _raw wouldn't include the renamed fields, correct? If _raw is the way to go, what is the token for it? $result._raw$?
I've tried identifying all the individual fields in events and extracting them with rex:

| rex "\s\<externalFactor\>(?<externalFactor>.*)\<\/externalFactor\>"
| rex "\s\<externalFactorReturn\>(?<externalFactorReturn>.*)\<\/externalFactorReturn\>"
| rex "\<current\>(?<current>.*)\<\/current\>"
| rex "\<encrypted\>(?<encrypted>.*)\<\/encrypted\>"
| rex "\<keywordp\>(?<keywordp>.*)\<\/keywordp\>"
| rex "\<pepres\>(?<pepres>.*)\<\/pepres\>"
| rex "\<roleName\>(?<roleName>.*)\<\/roleName\>"
| rex "\<boriskhan\>(?<boriskhan>.*)\<\/boriskhan\>"
| rex "\<sload\>(?<sload>.*)\<\/sload\>"
| rex "\<externalFactor\>(?<externalFactor>.*)\<\/externalFactor\>"
| rex "\<parkeristrator\>(?<parkeristrator>.*)\<\/parkeristrator\>"
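Two possible simplifications, assuming the events are well-formed XML fragments: replace the greedy .* in each pattern with [^<]* so a capture cannot run past its closing tag, or let spath extract every element in one pass (element paths depend on the actual nesting in the events):

```
| spath
| table externalFactor externalFactorReturn current encrypted keywordp pepres roleName boriskhan sload parkeristrator
```

With no arguments, spath parses _raw as XML (or JSON) and extracts each element as a field, which avoids maintaining one rex per tag.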