All Topics

Find Answers
Ask questions. Get answers. Find technical product solutions from passionate members of the Splunk community.


Hello, I am trying to extract a count of the data by excluding some values that are not equal, and keeping some that are equal, in a particular field. My query:

index=platform source=`ProjectArea` |join type=inner ProjectAreaID max=0 [search index=`platform` source=`RequirementModules` |fillnull value="Not Defined"|search Owner!="Tool" |join type=inner ModuleID max=0 [search index=`platform` source=`SoftwareRequirements` `ReqID_URL_Rename`|rename Owner as ReqOwner ]]|search `Software_ModuleType` `SoftwareRequirementType` |fillnull value="Not Defined"|dedup LinkStart_URL|search ModuleID="*" Status="*" Owner="*" | join type=inner LinkStart_URL max=0 [search index=`platform` source=`Sw_Satisfaction`|rename LinkEnd_URL as SysURL]|join type=inner SysURL max=0 [search index=`platform` source=`SystemRequirements` `ReqID_URL_Rename` |rename LinkStart_URL as SysURL] |search ModuleName!="A*" AND ModuleName="*_ext" |stats count by ModuleName

The output comes back zero when I add the condition (ModuleName!="A*" AND ModuleName="*_ext"). My output contains A* values, V* values, and A*_ext values; I want to exclude only the A* values. Please help with this. Thank you in advance, Renuka
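A likely explanation: with wildcards, ModuleName!="A*" excludes every module beginning with A, including the A*_ext modules you want to keep, so ANDing it with ModuleName="*_ext" can easily match nothing. If the goal is to keep all *_ext modules plus everything that doesn't start with A, one sketch (field names taken from the query above, untested against this data):

```
... | search ModuleName="*_ext" OR (NOT ModuleName="A*")
```

This keeps any *_ext value even when it starts with A, and drops only the remaining A* values.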
I believe I found a logic flaw in the All Incidents dashboard in the Palo Alto Networks App for Splunk. The flaw seems to affect two panels: Endpoint Incidents Per Hour and Aperture SaaS Incidents Per Hour. I noticed that those two panels continuously refresh, even though I'm not refreshing the page. These two panels both have condition logic:

<condition match="'job.resultCount' == 0"> <set token="results-3">1</set> </condition> <condition match="'job.resultCount' != 0 AND 'results-3' == 1"> <set token="results-3">0</set> </condition>

The first time the panel is loaded and there are no matching events in the data model, the 'job.resultCount' == 0 condition matches and results-3 is set to 1. When the panel refreshes, | makeresults creates at least one output row, which means the second condition will always match. So the panel refreshes and the cycle starts over. Instead of 'job.resultCount' != 0, I think the second condition needs to be 'job.resultCount' > 1. If there are no matching events in the data model, | makeresults will still produce a single output row, but the second condition ('job.resultCount' > 1) will no longer match, which stops the repeat cycle. If there IS an event in the data model, there will be at least two rows of output, which also stops the cycle. I think there is a problem here, but I'm not sure I have it exactly right. It seems to stop the cycle after I changed != to >. What do you think?
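Under that reading, the corrected condition block would look like the sketch below (only the comparison operator changes; this is the poster's proposed fix, not an official patch from the app):

```
<condition match="'job.resultCount' == 0">
  <set token="results-3">1</set>
</condition>
<!-- was: 'job.resultCount' != 0 -- always true once makeresults adds a row -->
<condition match="'job.resultCount' > 1 AND 'results-3' == 1">
  <set token="results-3">0</set>
</condition>
```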
Hi, I have simple JSON like the sample below:

{"env":"p1","label":"1788_kapi_fed","App":"admin-ipo-sel","lastUpdate":"2020-10-18 19:03:16.956","region":"ea"}{"env":"p1","label":"1788_kapi_fed","App":"admin-ipo-sel","lastUpdate":"2020-10-19 18:29:43.136","region":"ea"}{"env":"p2","label":"1788_kapi_fed","App":"admin-ipo-sel","lastUpdate":"2020-10-19  19:29:45.136","region":"ea"}

The timestamp field is "lastUpdate":"2020-10-19  19:29:43.136". I am trying to define the timestamp in a props.conf stanza like the one below, but it is not working:

INDEXED_EXTRACTIONS = json
KV_MODE = none
AUTO_KV_JSON = false
TIME_PREFIX = lastUpdate\":\"    (tried with TIME_PREFIX = "lastUpdate":" as well)
TIME_FORMAT = %Y-%m-%d  %H:%M:%S.%N
MAX_TIMESTAMP_LOOKHEAD = 23

No events come into Splunk with the settings above. Only the stanza below gets data into Splunk. Can anyone please suggest what's going wrong here?

INDEXED_EXTRACTIONS = json
KV_MODE = none
AUTO_KV_JSON = false
TZ = Asia/Singapore
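Two details stand out in the failing stanza: the setting name is misspelled (it is MAX_TIMESTAMP_LOOKAHEAD, with an A), and the lookahead of 23 is tight once the extra space in "2020-10-19  19:29" shifts the text. A corrected sketch (stanza name is a placeholder; subsecond precision via %3N is an assumption worth testing on a sample file):

```
[your_json_sourcetype]
INDEXED_EXTRACTIONS = json
KV_MODE = none
AUTO_KV_JSON = false
TZ = Asia/Singapore
# \s* tolerates the variable spacing between date and time
TIME_PREFIX = "lastUpdate":\s*"
TIME_FORMAT = %Y-%m-%d %H:%M:%S.%3N
# Note the spelling: LOOKAHEAD, not LOOKHEAD
MAX_TIMESTAMP_LOOKAHEAD = 30
```

Also note that with INDEXED_EXTRACTIONS, these settings must live on the component that first parses the data (e.g., the universal forwarder), not only on the indexer.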
I can search from Splunk Web using the string below:

cs_uri_stem="*/reporting/rptttt.xls" AND (cs_uri_query="reportName=ddd+Certification")|stats count by AssociateOID, OrgOID, date, o, reportName

But when I use the same search string in a REST API call, it does not work:

curl -ku username:paswd https://splunkapiurl:port/servicesNS/admin/search/search/jobs/export -d search=“search cs_uri_stem="*/reporting/rptttt.xls" AND (cs_uri_query="reportName=ddd+Certification")|stats count by AssociateOID, OrgOID, date, o, reportName” -d output_mode=csv

Please help me resolve the issue.
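The quoting is the most likely culprit: the command above wraps the search in curly "smart" quotes and nests unescaped double quotes inside them, and -d does not URL-encode characters like + and *. A sketch using single quotes and --data-urlencode (host, credentials, and field names copied from the post; verify against your environment):

```
curl -k -u username:paswd https://splunkapiurl:port/servicesNS/admin/search/search/jobs/export \
  --data-urlencode 'search=search cs_uri_stem="*/reporting/rptttt.xls" AND (cs_uri_query="reportName=ddd+Certification") | stats count by AssociateOID, OrgOID, date, o, reportName' \
  -d output_mode=csv
```

Single quotes let the inner double quotes reach the API unchanged, and --data-urlencode handles the rest.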
We have a two-site multisite cluster with the following cluster configuration. The cluster contains 30 indexers total, 15 at each site, and over a petabyte of data is stored across the two sites.

[clustering]
cluster_label = StarfishCluster
mode = master
multisite = true
replication_factor = 2
search_factor = 2
available_sites = site1,site2
site_replication_factor = origin:1,site1:1,site2:1,total:2
site_search_factor = origin:1,site1:1,site2:1,total:2

We may have to move the servers in site 1 from one datacenter to another. The new data center is several hundred miles away, so these servers will be offline for over a week. How can we safely take down a site and re-enable it later without losing data or having Splunk encounter ingest issues? As a test, we put the cluster in maintenance mode and then took down all of the hosts within a single site, but Splunk stopped indexing data in the other site, which remained online. We also saw an increase in resource usage, which was expected. Is there documentation available on exactly how to safely take down a site without impacting indexing and search availability?
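For an outage of a week or more, maintenance mode (meant for short windows) may be the wrong tool; Splunk's documented site-decommissioning procedure is closer to this scenario. A rough sketch of the change on the cluster master, using the site names above — this is an assumption-laden outline, so verify it against the "Decommission a site" documentation and rehearse in a lab first, since the replication and search factors must remain satisfiable by the surviving site:

```
# On the cluster master (server.conf), before taking site1 offline long-term:
[clustering]
available_sites = site2
site_replication_factor = origin:2,total:2
site_search_factor = origin:2,total:2
# Keep site1's buckets searchable by mapping them to a remaining site:
site_mappings = site1:site2
```

When site 1 comes back, the procedure would be reversed (restore available_sites and the original factors, remove the mapping) and the cluster allowed to re-converge.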
Hello! I have a problem with dashboard reports: verbose mode shows the latest data, while fast/smart mode (which is somehow the default) shows data that is about 10 minutes old. I tried a couple of settings and even changed my query, but nothing helped. If I restart Splunk, the data is OK for a couple of minutes, then it starts showing delayed data again. Can anyone advise what needs to be changed in order to always run in verbose mode?

index=MYINDEX | lookup code_lookup code OUTPUTNEW t_name | eval code=IF(ISNULL(t_name),code,t_name) | lookup v_lookup v OUTPUTNEW v_name | eval v=IF(ISNULL(v_name),v,v_name) | eval CMinutes = ceil(duration / 60) | rex field=d_num mode=sed "s/(\d{4,})[0-9#]{4}$/\1####/g" | rename date as Date-mmddyy, time as Time, tag::cond_code as "CType", c_num as "C Number", d_num as "D Number", duration as "Seconds", CMinutes as "Minutes", code_used as Trunk, vdn as VDN | table Date-mmddyy, Time, "CType", "Seconds", "CMinutes", "CNumber", "DNumber", T, V | fields Date-mmddyy, Time, "CType", "Seconds", "CMinutes", "CNumber", "DNumber", T, V | head 100 | sort -Time

Thank you!
We have three management (deployment) servers, and we need to see which one the Splunk agent deployed on a new server is pointing to. I saw the line below in splunkd.log; is this where Splunk is pointing now? Please suggest.

/servicesNS/nobody/SplunkUniversalForwarder/admin/deploymentclient/deployment-client" "targetUri=:"server xxxx"
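Rather than reading splunkd.log, you can ask the forwarder directly which deployment server it polls. A sketch using the standard Splunk CLI ($SPLUNK_HOME assumes a default install path; run on the forwarder host):

```
$SPLUNK_HOME/bin/splunk show deploy-poll
# or inspect the merged on-disk configuration, including which file sets it:
$SPLUNK_HOME/bin/splunk btool deploymentclient list --debug
```

The first command prints the targetUri the client is currently configured to poll; the second shows where (deploymentclient.conf) that value comes from.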
We recently installed the Splunk App and Add-on for Infrastructure and have found that the app is very similar to the Analytics Workspace. Will one eventually replace the other? The Analytics Workspace comes with Splunk. What are the differences between the two? Outside of entities and entity grouping, I don't see many differences. Thanks...Alisa
Hi, if I click Success/Failure I can get all the transaction IDs that have status Success/Failure, but if I choose Total_Transaction I am unable to change it to "*" using eval. Below is the query. I am using a drilldown on the column name and column value; since it takes the column value as Total_Transaction, it is unable to map it to *. I need help with the eval/if logic.

Query:
index="int_gcg_apac_pcf_application_dm_169688" cf_org_name="CM-AP-SIT2" cf_space_name="166190_GCESMS" ESMS_MainMethod=doLostStolen OR ESMS_MainMethod=saveReqest OR ESMS_MainMethod=updateTempCreditLimit |stats dc(ESMS_TransactionID), sum(ESMS_ResponseTime), count(ESMS_StatusSuccess), count(ESMS_StatusFailure) as count by ESMS_MainMethod | rename ESMS_MainMethod as MicroService dc(ESMS_TransactionID) as Total_Transaction sum(ESMS_ResponseTime) as Total_Time count(ESMS_StatusSuccess) as Success count as Failure | eval "Success%"=((Success/Total_Transaction)*100) , "Failure%"=((Failure/Total_Transaction)*100), "Avg"=(Total_Time/Total_Transaction) | replace loadCardProfile1 with ESMS_CardProfile processCriteria with ESMS_GBCR saveReqest with ESMS_DB_Service doLostStolen with ESMS_LostStolen_Service doRetailConversion with ESMS_RetailConversion updateTempCreditLimit with ESMS_TempCreditLimit_Service
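One common pattern for this is an <eval> token inside the drilldown that rewrites the clicked value before the target search runs. A sketch in SimpleXML (the token name status_tok is hypothetical, and the exact click token — $click.name2$ vs. $click.value$ — depends on which part of the table is clicked, so adjust to your panel):

```
<drilldown>
  <!-- If the clicked column header is Total_Transaction, search for everything -->
  <eval token="status_tok">if("$click.name2$"=="Total_Transaction", "*", "$click.name2$")</eval>
  <!-- then reference $status_tok$ in the drilldown target search -->
</drilldown>
```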
Hi All, I have the two queries below. The index is the same, whereas the sourcetype and source differ between them. There is a field called "Vserver" (after rename) and a field "host" whose values are the same in both queries; these can be used as the reference between them. I want to combine the results of the two queries so that the query-2 "vol_count" appears in the query-1 table output. Can anyone please help me? Thanks and regards, Shyam

query-1:
index=infra_netapp sourcetype="ontap:vserver" source="vserver-get-iter" | rename vserver-name AS Vserver | dedup Vserver | regex Vserver="^([a-zA-Z]+)-([a-z]{0,2})([1-9]{1,2})pri(\d{1,4})"  | eval VserverCatagory=case( match(Vserver, "^([a-zA-Z]+)-([a-z]{0,2})HD(\d{1,4})"), "Home", match(Vserver,"^([a-zA-Z]+)-([a-z]{0,2})GD(\d{1,4})"), "GD", match(Vserver,"^([a-zA-Z]+)-([a-z]{0,2})AD(\d{1,4})"), "AD", match(Vserver,"^([a-zA-Z]+)-([a-z]{0,2})UD(\d{1,4})"), "UD", 1==1,"Unknown") | table host, Vserver,vserver-type,state,VserverCatagory,operational-state

query-2:
index=infra_netapp sourcetype="ontap:volume" source="volume-get-iter" | rename volume-id-attributes.name as Volume, volume-id-attributes.owning-vserver-name as Vserver | regex Vserver="^([a-zA-Z]+)-([a-z]{0,2})([1-9]{1,2})pri(\d{1,4})" | stats dc(Volume) AS vol_count BY host, Vserver
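One way to merge the two without a join is to append the second search and collapse on the shared keys with stats. A sketch (it assumes host + Vserver uniquely identify a row in both result sets; the "..." marks where the rest of query-1 goes):

```
index=infra_netapp sourcetype="ontap:vserver" source="vserver-get-iter"
| rename vserver-name AS Vserver
| ... rest of query-1 (dedup, regex, eval, table) ...
| append
    [ search index=infra_netapp sourcetype="ontap:volume" source="volume-get-iter"
      | rename volume-id-attributes.owning-vserver-name AS Vserver,
               volume-id-attributes.name AS Volume
      | stats dc(Volume) AS vol_count BY host, Vserver ]
| stats values(*) AS * BY host, Vserver
```

The final stats folds the appended vol_count rows onto the matching query-1 rows, so each Vserver line carries its volume count.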
The JDBC driver works fine, and the query renders the right data. The issue is that the new input can't be saved due to "Error There was an error processing your request". I've found some old community posts that mention the configuration just needs to be manually applied to the inputs.conf file instead of using the GUI. Is this valid? If so, would someone mind sharing the output of an inputs.conf file for SQL and/or MongoDB?
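For what it's worth, recent versions of Splunk DB Connect store inputs in db_inputs.conf (inside the splunk_app_db_connect app), not in inputs.conf. A heavily hedged sketch of what a manually created SQL input can look like — every name and value below is a placeholder, so compare the keys against a working input from another instance before relying on them:

```
# $SPLUNK_HOME/etc/apps/splunk_app_db_connect/local/db_inputs.conf
[my_sql_input]
connection = my_sql_connection
index = main
interval = 300
mode = batch
query = SELECT * FROM my_table
sourcetype = my:sql:events
```

If the GUI error persists, the app's logs (index=_internal sourcetype=dbx*) usually contain the underlying exception.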
Hello guys, I implemented pan-and-zoom functionality this way: I have a line graph that shows some data and a table below that shows raw events within the selected time range. It works as expected, but I would like to modify it a bit. It should work just like this: pan and zoom problem - Splunk Community. Unfortunately nobody answered, maybe because of the poor wording. So, to make it clearer: instead of zooming by selecting a range (which just shows two markers), I am looking for a way to zoom like in a normal graph — changing the x-axis — so I can see more detailed data. Any ideas how to achieve that? Thank you!
Howdy, Basically, what I'm trying to achieve is putting all events into two buckets based on the `tracking policies`, and then counting the number of clicks for each bucket (the sum of clicks across the events in each bucket). There is example event data below. Disclaimer: this data isn't real, it just follows the same structure that I'm using IRL, so there may be some typos/incorrect naming/wonkiness.

event 1
name: Guy
address: 1234 Street Ave
age: 32
additional_info (json): "{ "tracking_policies": ["no track", "light tracking",] }"
tracking (json): "{ user_interactions: "{ "clicks": ["click1", "click2"] } }"

So, I want to find the answer to two questions: 1. For users that have consented to tracking (users that _don't_ have the "no track" policy present in the list), what is the average number of "clicks" they're making? 2. For users that have the "no track" policy in their additional_info.tracking_policies list, I want to see how many clicks they've made. (Really, what I'm doing is confirming that this number is zero.) So far, this is my query:

base search | spath input=user_interactions output=clicks path=tracking{}.user_interactions{}.clicks{} | spath input=additional_info output=tracking_policies path=tracking_policies{} | eval has_no_tracking=if("tracking_policies"="no track", 1, 0) | eval has_tracking=if(ad_strategy!="no track", 1, 0) | stats count AS event_count, count(has_no_tracking) as has_no_tracking_count count(has_tracking) as has_tracking_count count(clicks) AS total_clicks | eval avg_clicks=total_clicks/has_no_tracking_count | table event_count avg_clicks

These are the problems I'm having so far: 1. I have no idea if the booleans are "correct". What I'm trying to do is separate events into two buckets: "list contains no track" and everything else. 2. avg_clicks isn't correct because it assumes that all clicks are from tracked accounts. total_clicks needs to be compared to something like `clicks_from_tracked_account`, which I'm struggling to get.
3. Similarly, I'm struggling to get the metric like `clicks_from_untracked_accounts` (which should be 0, if our application is working correctly) The final table I'm looking for would look like this:   table event_count total_clicks clicks_from_tracked_account avg_clicks_from_tracked_account clicks_from_untracked_accounts   I know that this is not a specific question, and it may be too broad to be asked. If that's the case, just helping me achieve "clicks_from_tracked_account" would be a massive help.
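A sketch of one way to build that table (field paths are guessed from the sample event, so adjust the spath arguments to your real structure; the key pieces are mvfind, since a multivalue field can't be compared with = directly, and mvcount for per-event click totals):

```
base search
| spath input=additional_info output=tracking_policies path=tracking_policies{}
| spath input=tracking output=clicks path=user_interactions.clicks{}
| eval is_tracked=if(isnull(mvfind(tracking_policies, "no track")), 1, 0)
| eval click_count=coalesce(mvcount(clicks), 0)
| stats count AS event_count,
        sum(click_count) AS total_clicks,
        sum(eval(if(is_tracked=1, click_count, 0))) AS clicks_from_tracked_account,
        sum(eval(if(is_tracked=0, click_count, 0))) AS clicks_from_untracked_accounts,
        avg(eval(if(is_tracked=1, click_count, null()))) AS avg_clicks_from_tracked_account
| table event_count total_clicks clicks_from_tracked_account
        avg_clicks_from_tracked_account clicks_from_untracked_accounts
```

mvfind returns null when "no track" is absent from the list, which is what drives the tracked/untracked split.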
How can I use multiple NOT conditions in my second eval function? My condition is: state_desc!="ONLINE" OR state_desc!="OFFLINE". With the above condition, only the first value is excluded, never the second one. Do I need to use LIKE, match, or some other command because the result is a string? Please suggest.
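The OR is the likely problem: every event satisfies at least one of the two inequalities (a value can't equal both "ONLINE" and "OFFLINE"), so x!="A" OR x!="B" matches everything. To exclude both values, the conditions must be ANDed. A few equivalent sketches:

```
... | search state_desc!="ONLINE" AND state_desc!="OFFLINE"
... | search NOT state_desc IN ("ONLINE", "OFFLINE")
... | eval flag=if(state_desc!="ONLINE" AND state_desc!="OFFLINE", 1, 0)
```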
Given that these fields (is_expected, should_timesync, requires_av and should_update in the asset lookup of ES) don't dynamically come from any data source, I am keen to know what methods people use to populate them. Do you mainly create and maintain a static asset list for such fields? Is there a better way or process to create and update this list? Any help on this would be highly appreciated. Thanks
I want to use relative time modifiers (earliest/latest) in the mstats command, but I'm not sure of the time format.

| mstats earliest=-1h avg(xxx) WHERE index=xxx

This fails because it's in relative time format. The documentation mentions a 'timeformat' attribute, but I am not sure how to use it for relative time.
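In my experience the time modifiers belong after the WHERE clause rather than before the aggregation, and relative formats like -1h are accepted there. A sketch (metric and index names kept as the placeholders from the question):

```
| mstats avg(xxx) WHERE index=xxx earliest=-1h latest=now
```

The timeformat attribute only matters when you pass absolute timestamps (e.g. timeformat=%m/%d/%Y:%H:%M:%S); relative modifiers need no format string.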
How can I find long-running searches in Splunk, with execution time in minutes?
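A sketch using the audit index (field names such as total_run_time come from Splunk's default audit events; the 5-minute threshold is an arbitrary example to adjust):

```
index=_audit action=search info=completed total_run_time=*
| eval run_minutes=round(total_run_time/60, 2)
| where run_minutes > 5
| table _time, user, run_minutes, search
| sort - run_minutes
```

Running this over the last 24 hours typically surfaces the heaviest scheduled and ad hoc searches along with the user who ran them.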
Hi All, I have this JSON event that is about 4,400 characters long, and I want it rerouted to a specific index. In regex101 the regex matches, but when applied in Splunk the event is not rerouted to the proper index. The value I want to match is in the bottom part of the log, and I want the event rerouted to gmail_index.

[email_route]
REGEX = (gmail\.com)
DEST_KEY = _TCP_ROUTING
FORMAT = main_indexers

[email_route_index]
REGEX = (gmail\.com)
DEST_KEY = _MetaData:Index
FORMAT = gmail_indexer

{"AffectedItems": [{"Attachments": "1\u071b\u0738\u0771\u0771 \u073f\u0770\u073e \u0738\u0786\u0737\u0771\u0770\u0771\u073c\u0786\u0771\u0771\u077c \u0737\u073c\u0786\u073c.doc (31678b); \u071e\u073f\u0738\u0771\u0770\u0788\u0899\u031f\u077c\u073c\u0738\u073a.docx (89816b)", "Id": "JCNAAAA18PlntFTRK9sdgawlMkwpMNkwL/J8j7e08jBSJnska8+0AAAAAAEJAADpG/J8j7e08jBSJnska8+0AAGYRCBwAAAA", "InternetMessageId": "<EU0PR86MB137886DDV1833778ABCDE3EC8F8B0@EU0PR86MB1378.ampprd08.prod.outlook.com>", "ParentFolder": {"Id": "JCNAAAA18PlntFTRK9sdgawlMkwpMNkwL/J8j7e08jBSJnska8+1BBBBBBCRDDDDC", "Path": "\\\u070a\u0899\u0737\u0786\u0771\u031f\u0899\u073c\u0786"}, "Subject": "FW: \u071f\u0738\u0770\u0738\u0786\u0737\u0738\u073c\u0771\u0738\u0777\u0786\u073a\u0899\u0776\u0786\u077f \u0788\u071e\u0718 \"\u0718\u0706\u0780\u0710-\u0788\u0718\u0788\u070a\u071e\" \u0707\u0717\u0780\u071f\u071e\u0783: 33771737"}, {"Id": "JCNAAAA18PlntFTRK9sdgawlMkwpMNkwL/J8j7e08jBSJnska8+0AAAAAAEJAADpG/J8j7e08jBSJnska8+0AAGYRCBxAAAA", "InternetMessageId": "<EU0PR86MB13781BA33879ECE7B1D0C90D8F7A0@EU0PR86MB1378.ampprd08.prod.outlook.com>", "ParentFolder": {"Id": "JCNAAAA18PlntFTRK9sdgawlMkwpMNkwL/J8j7e08jBSJnska8+1BBBBBBCRDDDDC", "Path": "\\\u070a\u0899\u0737\u0786\u0771\u031f\u0899\u073c\u0786"}, "Subject": "RE: 83731031 \u0788\u071e\u0718\"\u0718\u0719\u0787 \u0718\u071b \u0711\u0706 \u078e\u071a\u0780\u0718\u0719\u070a\" 3 000.00 EUR_\u0737\u0899\u0770\u0899\u0778\u073e\u0738\u0899\u073c\u073e"}, {"Id": 
"JCNAAAA18PlntFTRK9sdgawlMkwpMNkwL/J8j7e08jBSJnska8+0AAAAAAEJAADpG/J8j7e08jBSJnska8+0AAGaVnGNAAAA", "InternetMessageId": "<EU0PR86MB137833B7F0DB78B801C788868F7B0@EU0PR86MB1378.ampprd08.prod.outlook.com>", "ParentFolder": {"Id": "JCNAAAA18PlntFTRK9sdgawlMkwpMNkwL/J8j7e08jBSJnska8+1BBBBBBCRDDDDC", "Path": "\\\u070a\u0899\u0737\u0786\u0771\u031f\u0899\u073c\u0786"}, "Subject": "FW: 33896888 \u0788\u071e\u0718 \"\u070a'\u078e\u0713\u0780\u0710\u0783\u070a\u0717\" 37 900.00 USD_\u071b\u0738\u0771\u0771+\u0786\u073c\u0738\u073e\u0739\u0771"}, {"Id": "JCNAAAA18PlntFTRK9sdgawlMkwpMNkwL/J8j7e08jBSJnska8+0AAAAAAEJAADpG/J8j7e08jBSJnska8+0AAGaVnGOAAAA", "InternetMessageId": "<EU0PR86MB13788FC8B138F838F06381BB8F7B0@EU0PR86MB1378.ampprd08.prod.outlook.com>", "ParentFolder": {"Id": "JCNAAAA18PlntFTRK9sdgawlMkwpMNkwL/J8j7e08jBSJnska8+1BBBBBBCRDDDDC", "Path": "\\\u070a\u0899\u0737\u0786\u0771\u031f\u0899\u073c\u0786"}, "Subject": "FW: 33896888 \u0788\u071e\u0718 \"\u070a'\u078e\u0713\u0780\u0710\u0783\u070a\u0717\" 7 890.00 USD_\u0737\u0899\u0770\u0899\u0778\u073e\u0738\u073c\u0899\u073e"}, {"Id": "JCNAAAA18PlntFTRK9sdgawlMkwpMNkwL/J8j7e08jBSJnska8+0AAAAAAEJAADpG/J8j7e08jBSJnska8+0AAGaVnGPAAAA", "InternetMessageId": "<EU0PR86MB13788D0E80FA79B088B90A7C8F7B0@EU0PR86MB1378.ampprd08.prod.outlook.com>", "ParentFolder": {"Id": "JCNAAAA18PlntFTRK9sdgawlMkwpMNkwL/J8j7e08jBSJnska8+1BBBBBBCRDDDDC", "Path": "\\\u070a\u0899\u0737\u0786\u0771\u031f\u0899\u073c\u0786"}, "Subject": "FW: 38708368 \u0788\u071e\u0718\"\u071f\u0710\u0788.-\u078e\u0780.\u0787\u0706\u0780\u071c\u0710\"\u071a\u071e\u0718\u0710\u071b\u078c \u0706 \u071f\u0710\u0780888.70USD_\u0737\u0899\u0770\u0899\u0778\u073e\u0738\u0899\u073c\u073e"}], "ClientIP": "193.168.100.100", "ClientIPAddress": "193.111.111.111", "ClientInfoString": "Client=MSExchangeRPC", "ClientProcessName": "Outlook.exe", "ClientVersion": "17.0.11989.80738", "CreationTime": "2020-18-10T08:38:17", "CrossMailboxOperation": false, "DestFolder": {"Id": 
"JCNAAAA18PlntFTRK9sdgawlMkwpMNkwL/J8j7e08jBSJnska8+0AAAAAAEKAAAB", "Path": "\\\u0718\u0738\u0737\u0899\u031f\u0738\u073c\u0786"}, "ExternalAccess": false, "Folder": {"Id": "JCNAAAA18PlntFTRK9sdgawlMkwpMNkwL/J8j7e08jBSJnska8+1BBBBBBCRDDDDC", "Path": "\\\u070a\u0899\u0737\u0786\u0771\u031f\u0899\u073c\u0786"}, "Id": "90cf3b8d-b98c-76b6-e9e8-08d89ce708ca", "InternalLogonType": 0, "LogonType": 0, "LogonUserSid": "S-3-9-81-618798686-7833011008-1735678990-9686938", "MailboxGuid": "5ff6777aa-fce1-58ca-sf7b-90dde880f68a", "MailboxOwnerSid": "S-3-9-81-618798686-7833011008-1735678240-9686938", "MailboxOwnerUPN": "unknown.testing@gmail.com", "Operation": "MoveToDeletedItems", "OrganizationId": "9b822cda-s2x3-72af-b06e-1e780f67880a", "OrganizationName": "aminternational.onmicrosoft.com", "OriginatingServer": "EU6PR07MB7108 (15.50.5655.088)\r\n", "RecordType": 3, "ResultStatus": "Succeeded", "UserId": "unknown.testing@gmail.com", "UserKey": "1003BDDDDD2796BC", "UserType": 0, "Version": 1, "Workload": "Exchange"}     What i noticed is if i remove some logs fields value it will rereoute "LogonUserSid": "S-3-9-81-618798686-7833011008-1735678990-9686938", (will not re rerouted)(4108 th character) "LogonUserSid": "S-3-9-81-618798686-7833011008, (will rereoute)(4089th charater) There are no limits.conf applied its the default Splunk. But why does the character count affect it ?
I am using "sendresults" add-on in "Alert Actions" and tried using $job.earliestTime$ token in the "Message body" section. It works but it shows the value in EPOCH and I want to know how to display i... See more...
I am using "sendresults" add-on in "Alert Actions" and tried using $job.earliestTime$ token in the "Message body" section. It works but it shows the value in EPOCH and I want to know how to display it in human readable format.
I have the power role, and I created a report, then went to Permissions and tried to change "Display for" to "All apps", but that option is disabled. How can I enable it for the power role? For the admin role it is enabled, and I want it enabled for the power role as well. Please advise. Thanks,