All Topics

Hello, I'm looking for some assistance in reconstructing my query, which currently uses | transaction with a traceId value to tie together a couple of different sourcetypes/sources. The query runs really slowly; some of the sourcetype log results number in the 200 million range, so I'm looking to speed it up by using | stats by traceId instead.

The first source example snippet shows the traceId and the 404 response code I am looking for:

time=2021-12-11T23:59:51-07:00 time_ms=2021-12-11T23:59:51-07:00.620+ requestId=-1796576042 traceId=-1796576042 servicePath="/nationalnavigation/" remoteAddr=x.x.x.x clientIp=x.x.x.x clientAppVersion=NOT_AVAILABLE clientDeviceType=NOT_AVAILABLE app_version=- apiKey=somekey oauth_leg=2-legged authMethod=oauth apiAuth=true apiAuthPath=/ oauth_version=1.0 target_bg=default requestHost=services.timewarnercable.com requestPort=8080 requestMethod=GET requestURL="/nationalnavigation/V1/symphoni/event/tmsid/blah.com::TVNF0321206000538347?division=FTWR&lineup=15&profile=sg_v1&cacheID=959&longAdvisory=false&vodId=fort_worth&tuneToChannel=false&watchLive=true&watchOnDemand=true&rtReviewsLimit=0&includeAdult=f" requestSize=835 responseStatus=404 responseSize=420 responseTime=0.405 userAgent="Java/1.xxx" mapTEnabled="F" cClientIp="V-1|IP-x.x.x.x|SourcePort-12345|TrafficOriginID-x.x.x.x" sourcePort="12345" appleEgressEnabled="F" oauth_consumer_key="somekey" x_pi_auth_failure="-" pi_log="pi_ngxgw_access"

The second source example shows the REST server logs with an exception:

2021-12-11 23:59:51,261 ERROR [qtp1647496677-7239] [-1796576042] [c.t.a.n.r.s.r.s.SymphoniRestServiceBroker.handleNnsServiceErrorHeaders:1363] An internal service error occurred: com.twc.atgw.nationalnavigation.SymphoniWebException: Event Not Found

Here's the current query I am looking to improve:

index=vap sourcetype=nns_all OR sourcetype=pi_ngxgw_access "nationalnavigation.SymphoniWebException: Event Not Found" OR "responseStatus=404"
| rex "\] \[(?<traceId>.+)\] \[c.t.a.n.r.s.r.s"
| transaction keepevicted=true by traceId
| search "nationalnavigation.SymphoniWebException: Event Not Found" AND "responseStatus=404"
| mvexpand requestURL
| search requestURL="/nationalnavigation/V1/symphoni/series/tmsproviderprogramid*" OR "/nationalnavigation/V1/symphoni/event/tmsid*"
| eval requestURLLength=len(requestURL)
| rex field=requestURL "/nationalnavigation/V1/symphoni/event/tmsid/.*\%3A\%3A(?<queryString>.+)"
| eval endpoint=case(match(requestURL,"/nationalnavigation/V1/symphoni/series/tmsproviderprogramid*"), "/nationalnavigation/V1/symphoni/series/tmsproviderprogramid", match(requestURL,"/nationalnavigation/V1/symphoni/event/tmsid*"), "/nationalnavigation/V1/symphoni/event/tmsid", 1=1, requestURL)
| rex field=queryString "(?<tmsIds>[^?]*)"
| rex field=queryString "(?<tmsProviderProgramIds>[^?]*)"
| eval assetIds=coalesce(tmsIds,tmsProviderProgramIds)
| eval assetCount=mvcount(split(assetIds,","))
| stats count AS TxnCount by endpoint
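A minimal sketch of the stats-based rewrite (hedged: one plausible shape rather than a drop-in replacement; the traceId field name and marker strings are taken from the samples above, and the restTraceId helper field is introduced here for illustration). The idea is to flag each event for the condition it can satisfy, then let stats collapse events that share a traceId, avoiding transaction's memory cost:

index=vap sourcetype=nns_all OR sourcetype=pi_ngxgw_access "nationalnavigation.SymphoniWebException: Event Not Found" OR "responseStatus=404"
| rex "\] \[(?<restTraceId>[^\]]+)\] \[c\.t\.a\.n\.r\.s\.r\.s"
| eval traceId=coalesce(traceId, restTraceId)
| eval hasException=if(like(_raw, "%SymphoniWebException: Event Not Found%"), 1, 0)
| eval has404=if(responseStatus==404, 1, 0)
| stats max(hasException) AS hasException, max(has404) AS has404, values(requestURL) AS requestURL by traceId
| where hasException=1 AND has404=1

From here the existing mvexpand/endpoint logic should apply largely unchanged, since values(requestURL) reproduces the multivalue field the original | mvexpand requestURL expects.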
Background: I'm working on a form that associates Qualys vulnerability IDs (QIDs) with CVE IDs. I'm leveraging two lookup tables, one QID-centric, the other CVE-ID-centric. It's a 3-panel form with only one panel initially visible. If a CVE is clicked in the initial panel, the two additional panels become visible; because there is often a series of CVE IDs associated with a single QID, one panel returns results for the clicked CVE and the other returns results for all CVEs in that QID.

Initial panel:

| inputlookup qid-cve.csv
| fillnull
| search TITLE="$title$" QID=$qid$ CVE=*$cve$* VENDOR_REFERENCE=$vr$ CATEGORY=$category$
| makemv delim=", " CVE

Drilldown panel 1:

| inputlookup cve-details.csv
| rename Name as CVE
| table CVE Description References Votes Comments
| search CVE=$form.cve$

And the pane of problems:

| inputlookup cve-details.csv
| rename Name as CVE
| table CVE Description References Votes Comments filter
| eval filter="$cve_list$"
| eval filter=replace(filter, "(CVE\-\d{4}\-\d+)\,?", " OR CVE=\"\1\"")
| eval filter=replace(filter, "^ OR ", "")
| where CVE=filter

What I'm looking for: take a comma-separated field containing all of the CVEs in a QID, join the values together with " OR CVE=\"$_\"", and have that directly interpreted as SPL passed to a where command. Note that CVE IDs contain hyphens, so in a where clause an unquoted ID gets interpreted as a subtraction expression; quoting the CVEs is definitely part of the solution. Here's an example of what I need the SPL to look like:

| inputlookup "cve-details.csv"
| rename Name as CVE
| table CVE, Description, References, Votes, Comments
| where CVE="CVE-2020-13543" OR CVE="CVE-2021-13543" OR CVE="CVE-2020-13584" OR CVE="CVE-2021-13584" OR CVE="CVE-2020-9948" OR CVE="CVE-2021-9948" OR CVE="CVE-2020-9951" OR CVE="CVE-2021-9951" OR CVE="CVE-2020-9983" OR CVE="CVE-2021-9983"

where the variable being expanded holds the string:

"CVE-2020-13543" OR CVE="CVE-2021-13543" OR CVE="CVE-2020-13584" OR CVE="CVE-2021-13584" OR CVE="CVE-2020-9948" OR CVE="CVE-2021-9948" OR CVE="CVE-2020-9951" OR CVE="CVE-2021-9951" OR CVE="CVE-2020-9983" OR CVE="CVE-2021-9983"

Problem: that final panel, the one that returns data for all CVEs in a QID, is proving quite difficult.
My query looks like this:

| inputlookup cve-details.csv
| rename Name as CVE
| table CVE Description References Votes Comments
| eval filter=replace("$cve_list$", "(CVE\-\d{4}\-\d+)\,?", " OR CVE=\"\1\"")
| eval filter=replace(filter, "^ OR CVE=", "")
| where CVE=filter

And it looks like this once it's optimized:

| inputlookup "cve-details.csv"
| rename Name as CVE
| table CVE, Description, References, Votes, Comments, filter
| eval filter=" OR CVE=\"CVE-2021-3587\" OR CVE=\"CVE-2021-3573\" OR CVE=\"CVE-2021-3564\" OR CVE=\"CVE-2021-3506\" OR CVE=\"CVE-2021-3483\" OR CVE=\"CVE-2021-33034\" OR CVE=\"CVE-2021-32399\" OR CVE=\"CVE-2021-31916\" OR CVE=\"CVE-2021-31829\" OR CVE=\"CVE-2021-29650\" OR CVE=\"CVE-2021-29647\" OR CVE=\"CVE-2021-29264\" OR CVE=\"CVE-2021-29155\" OR CVE=\"CVE-2021-29154\" OR CVE=\"CVE-2021-28971\" OR CVE=\"CVE-2021-28964\" OR CVE=\"CVE-2021-28688\" OR CVE=\"CVE-2021-26930\" OR CVE=\"CVE-2021-23134\" OR CVE=\"CVE-2021-23133\" OR CVE=\"CVE-2021-0129\" OR CVE=\"CVE-2020-29374\" OR CVE=\"CVE-2020-26558\" OR CVE=\"CVE-2020-26147\" OR CVE=\"CVE-2020-26139\" OR CVE=\"CVE-2020-25672\" OR CVE=\"CVE-2020-25671\" OR CVE=\"CVE-2020-25670\" OR CVE=\"CVE-2020-24588\" OR CVE=\"CVE-2020-24587\" OR CVE=\"CVE-2020-24586\"", filter=replace(filter,"^ OR CVE=","")
| where (CVE == filter)

How can I convince where to stop treating filter as a string literal? I've even added filter to my table results first so it would have a better chance of being treated as a field. That did not work, naturally.
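A minimal sketch of an alternative that sidesteps dynamic SPL entirely (hedged: it assumes the $cve_list$ token expands to a plain comma-separated list such as CVE-2020-13543,CVE-2021-13543). where evaluates its right-hand side as an expression, so a string containing SPL can never be executed there; the search command's IN operator, however, accepts a bare comma-separated value list, and a dashboard token can be dropped straight into it:

| inputlookup cve-details.csv
| rename Name as CVE
| table CVE Description References Votes Comments
| search CVE IN ($cve_list$)

Since search treats the hyphenated IDs as plain terms rather than arithmetic, no quoting gymnastics are needed.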
Hi there, I've set up a dashboard with various columns; one of them outputs a number field which has a comma (,) in it. I can remove the comma using the following command:

rex field=SurveyAnswers mode=sed "s/\,//g"

where SurveyAnswers is the column name in the table. This works fine in a separate search; however, the same command doesn't work when I try to add it to my dashboard and save. Any ideas? Thanks
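A minimal sketch of how this usually needs to sit inside the dashboard panel's own query, before the results are tabled (hedged: the index, sourcetype, and surrounding search below are hypothetical placeholders; only the rex line comes from the question):

index=my_survey_index sourcetype=my_survey_data
| rex field=SurveyAnswers mode=sed "s/\,//g"
| table SurveyAnswers

If the panel's number formatting is the real culprit, the table's own format options (or fieldformat) can strip or reintroduce separators at render time instead.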
Hello, I have a .NET 5 Web API deployed in a Linux Docker container, and I want to instrument the application with AppDynamics. The application works, and I test it every time with Swagger.

I've tried two implementations:

1. Install the .NET Core Microservices Agent for Windows -> https://docs.appdynamics.com/4.5.x/en/application-monitoring/install-app-server-agents/net-agent/net-microservices-agent/install-the-net-core-microservices-agent-for-windows
2. .NET Core for Linux SDK -> https://docs.appdynamics.com/21.5/en/application-monitoring/install-app-server-agents/net-agent/net-core-for-linux-sdk

The first scenario didn't work for me: I got a consistent error, a single log line saying "use clr profiler". The second does nothing at all; I have no feedback whatsoever and no log is created.

This is my Docker configuration:

"Docker": {
  "commandName": "Docker",
  "launchBrowser": true,
  "launchUrl": "{Scheme}://{ServiceHost}:{ServicePort}/Swagger",
  "environmentVariables": {
    "CORECLR_PROFILER": "{57e1aa68-2229-41aa-9931-a6e93bbc64d8}",
    "CORECLR_ENABLE_PROFILING": "1",
    "CORECLR_PROFILER_PATH": "/app/bin/Debug/net5.0/runtimes/linux-64/native/libappdprofiler.so",
    "APPDYNAMICS_LOG_PATH": "/app/bin/Debug/net5.0"
  },
  "publishAllPorts": true,
  "useSSL": true
}

And this is the AppDynamics configuration:

{
  "feature": ["FULL_AGENT"],
  "controller": {
    "host": "MYHOST.saas.appdynamics.com",
    "port": 443,
    "account": "myAccount",
    "password": "mypassword",
    "ssl": true,
    "enable_tls12": true
  },
  "application": {
    "name": "myapplicationname",
    "tier": "my-appliation-tier",
    "node": ""
  },
  "log": {
    "directory": "/app/bin/Debug/net5.0",
    "level": "ALL"
  }
}
Would love some guidance about what to look for in the logs.
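One hedged place to start (an assumption based on typical CoreCLR profiler behavior, not a confirmed diagnosis): if CORECLR_PROFILER_PATH does not resolve to an existing file inside the container, the runtime silently skips profiler loading and the agent never writes a log, which matches the "nothing at all" symptom. The path above uses runtimes/linux-64, while the native library often ships under an architecture-named directory such as runtimes/linux-x64, so it is worth verifying from the host:

# does the configured path actually exist inside the container?
docker exec <container-name> ls -l /app/bin/Debug/net5.0/runtimes/linux-64/native/libappdprofiler.so
# if not, where did the library really land?
docker exec <container-name> find /app -name "libappdprofiler.so"

If the first command fails but the second finds the library elsewhere, pointing CORECLR_PROFILER_PATH at the discovered path would be the next thing to try.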
I configured the Okta Identity Cloud for Splunk app to ingest Okta logs into Splunk, but I'm getting the error message below:

2021-12-21 16:34:35,586 ERROR pid=1375 tid=MainThread file=base_modinput.py:log_error:309 | Get error when collecting events.
Traceback (most recent call last):
  File "/opt/splunk/etc/apps/TA-Okta_Identity_Cloud_for_Splunk/bin/ta_okta_identity_cloud_for_splunk/aob_py3/solnlib/modular_input/checkpointer.py", line 218, in get
    record = self._collection_data.query_by_id(key)
  File "/opt/splunk/etc/apps/TA-Okta_Identity_Cloud_for_Splunk/bin/ta_okta_identity_cloud_for_splunk/aob_py3/solnlib/packages/splunklib/client.py", line 3648, in query_by_id
    return json.loads(self._get(UrlEncoded(str(id))).body.read().decode('utf-8'))
  File "/opt/splunk/etc/apps/TA-Okta_Identity_Cloud_for_Splunk/bin/ta_okta_identity_cloud_for_splunk/aob_py3/solnlib/packages/splunklib/client.py", line 3618, in _get
    return self.service.get(self.path + url, owner=self.owner, app=self.app, sharing=self.sharing, **kwargs)
  File "/opt/splunk/etc/apps/TA-Okta_Identity_Cloud_for_Splunk/bin/ta_okta_identity_cloud_for_splunk/aob_py3/solnlib/packages/splunklib/binding.py", line 289, in wrapper
    return request_fun(self, *args, **kwargs)
  File "/opt/splunk/etc/apps/TA-Okta_Identity_Cloud_for_Splunk/bin/ta_okta_identity_cloud_for_splunk/aob_py3/solnlib/packages/splunklib/binding.py", line 71, in new_f
    val = f(*args, **kwargs)
  File "/opt/splunk/etc/apps/TA-Okta_Identity_Cloud_for_Splunk/bin/ta_okta_identity_cloud_for_splunk/aob_py3/solnlib/packages/splunklib/binding.py", line 679, in get
    response = self.http.get(path, all_headers, **query)
  File "/opt/splunk/etc/apps/TA-Okta_Identity_Cloud_for_Splunk/bin/ta_okta_identity_cloud_for_splunk/aob_py3/solnlib/packages/splunklib/binding.py", line 1183, in get
    return self.request(url, { 'method': "GET", 'headers': headers })
  File "/opt/splunk/etc/apps/TA-Okta_Identity_Cloud_for_Splunk/bin/ta_okta_identity_cloud_for_splunk/aob_py3/solnlib/packages/splunklib/binding.py", line 1244, in request
    raise HTTPError(response)
solnlib.packages.splunklib.binding.HTTPError: HTTP 503 Service Unavailable -- KV Store is initializing. Please try again later.
During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "/opt/splunk/etc/apps/TA-Okta_Identity_Cloud_for_Splunk/bin/ta_okta_identity_cloud_for_splunk/aob_py3/modinput_wrapper/base_modinput.py", line 128, in stream_events
    self.collect_events(ew)
  File "/opt/splunk/etc/apps/TA-Okta_Identity_Cloud_for_Splunk/bin/okta_identity_cloud.py", line 68, in collect_events
    input_module.collect_events(self, ew)
  File "/opt/splunk/etc/apps/TA-Okta_Identity_Cloud_for_Splunk/bin/input_module_okta_identity_cloud.py", line 774, in collect_events
    lastTs = helper.get_check_point((cp_prefix + ":" + opt_metric + ":lastRun"))
  File "/opt/splunk/etc/apps/TA-Okta_Identity_Cloud_for_Splunk/bin/ta_okta_identity_cloud_for_splunk/aob_py3/modinput_wrapper/base_modinput.py", line 521, in get_check_point
    return self.ckpt.get(key)
  File "/opt/splunk/etc/apps/TA-Okta_Identity_Cloud_for_Splunk/bin/ta_okta_identity_cloud_for_splunk/aob_py3/solnlib/utils.py", line 159, in wrapper
    return func(*args, **kwargs)
  File "/opt/splunk/etc/apps/TA-Okta_Identity_Cloud_for_Splunk/bin/ta_okta_identity_cloud_for_splunk/aob_py3/solnlib/modular_input/checkpointer.py", line 222, in get
    'Get checkpoint failed: %s.', traceback.format_exc(e))
  File "/opt/splunk/lib/python3.7/traceback.py", line 167, in format_exc
    return "".join(format_exception(*sys.exc_info(), limit=limit, chain=chain))
  File "/opt/splunk/lib/python3.7/traceback.py", line 121, in format_exception
    type(value), value, tb, limit=limit).format(chain=chain))
  File "/opt/splunk/lib/python3.7/traceback.py", line 508, in __init__
    capture_locals=capture_locals)
  File "/opt/splunk/lib/python3.7/traceback.py", line 337, in extract
    if limit >= 0:
TypeError: '>=' not supported between instances of 'HTTPError' and 'int'
Hello, my Splunk search queries an API and gets a JSON answer. Here is a sample for one host (the full JSON answer is very long, roughly 400 hosts):

{
  "hosts": [
    {
      "hostInfo": {
        "displayName": "host1.fr"
      },
      "modules": [
        {
          "moduleType": "JAVA",
          "instances": [
            {
              "Instance Name": "Test1",
              "moduleVersion": "1.0"
            },
            {
              "Instance Name": "Test2",
              "moduleVersion": "1.1"
            },
            {
              "Instance Name": "Test3",
              "moduleVersion": "1.2"
            }
          ]
        }
      ]
    }
  ]
}

First of all, I have to parse this JSON manually because Splunk automatically extracts the first fields of the first host only.

With the following search, I manually parse the JSON all the way down to the "instances{}" array and count the number of moduleVersion values:

index="supervision_software" source="API" earliest=-1m
| spath path=hosts{}.modules{}.instances{} output=host
| fields - _raw
| mvexpand host
| spath input=host
| stats count(moduleVersion)

It displays 1277 moduleVersion values, and that is the right number.

On the other hand, with the next, similar search, where I parse the JSON starting only at the first array ("hosts{}"), I get a different number:

index="supervision_software" source="API" earliest=-1m
| spath path=hosts{} output=host
| fields - _raw
| mvexpand host
| spath input=host
| stats count(modules{}.instances{}.moduleVersion)

It displays 488 moduleVersion values, which is incorrect. Why is there a difference? Thank you. Best regards,
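A hedged explanation and sketch (an assumption consistent with the numbers, not verified against this data): in the first search each instance becomes its own event after mvexpand, so count sees one value per event; in the second search each host event carries modules{}.instances{}.moduleVersion as a multivalue field, and stats count() does not count every value inside a multivalue field. Counting the values explicitly should reconcile the two totals:

index="supervision_software" source="API" earliest=-1m
| spath path=hosts{} output=host
| fields - _raw
| mvexpand host
| spath input=host
| eval versionsPerHost=mvcount('modules{}.instances{}.moduleVersion')
| stats sum(versionsPerHost) AS moduleVersionCount

The single quotes around the field name are required in eval because of the braces and dots in the name.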
I'm trying to figure out how to show the uptime of a device as a percentage over 30 days, in a way that is agnostic to both Linux and Windows data.

For Linux I am currently using index=os sourcetype=Unix:Uptime as my data set; it's a default data set that ships with the Linux TA. For Windows I am using this search:

index=wineventlog LogName=System EventCode=6013
| rex field=Message "uptime is (?<uptime>\d+) seconds"
| eval Uptime_Minutes=uptime/60
| eval LastBoot=_time-uptime
| convert ctime(LastBoot)
| eval uptime=tostring(uptime, "duration")
| stats latest(_time) as time by host, Message, uptime, LastBoot

Currently, I can't figure out how to account for a reboot that occurs during the month. The Linux data doesn't have a 'LastBoot' field like the Windows data, and I'm not sure how to create one. The closest I've gotten is to use something like this for either Linux or Windows, simply renaming/creating the 'uptime' field in seconds:

index=nix sourcetype=Unix:Uptime
| rename SystemUpTime as uptime
| streamstats sum(uptime) as total by host
| eval tot_up=(total/157697280)*100
| eval host_uptime=floor(tot_up)
| stats max(host_uptime) as pctUp by host

This is obviously crude, and I'm trying to refine it, so I'm looking for any help. I'm clearly missing something, and I'm sure I'm not the first person to ask a question like this, though I couldn't find anything specific on Answers. I have a search that shows me total uptime as a duration for either Windows or Linux, and that's great! I'm just looking for total uptime as a percentage over a 30-day span that accounts for reboots or legitimate hard-down incidents.
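A minimal sketch of one session-based approach (hedged: it assumes both sourcetypes report cumulative seconds since boot, that the search time range is exactly the last 30 days, and that SystemUpTime is the Linux field name; the 10-minute bin is an arbitrary fudge for jitter in the computed boot time). Every sample implies a boot time of _time - uptime, so grouping samples by boot time yields one row per uptime session, and summing the longest observed uptime per session approximates total time up:

(index=os sourcetype=Unix:Uptime) OR (index=wineventlog LogName=System EventCode=6013)
| rex field=Message "uptime is (?<win_uptime>\d+) seconds"
| eval uptime=coalesce(win_uptime, SystemUpTime)
| eval boot=_time-uptime
| bin boot span=10m
| stats max(uptime) AS session_seconds by host boot
| stats sum(session_seconds) AS up_seconds by host
| eval pctUp=round(min(up_seconds/(30*86400), 1)*100, 2)

Anything the counters never saw (the gap between the last sample before a crash and the next boot) still counts as downtime, which is usually the desired behavior.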
We've gotten a search to work that shows the delta between the number of messages in an inbox for a period of time:

<basesearch>
| bin _time span=5m
| stats max(Items) AS Max by _time
| delta Max as Delta
| fields _time Delta
| where Delta>=10

But I want to do this based on multiple inboxes, and delta is merging the inboxes together, so the values of each inbox are interfering with each other.

<basesearch multiple mailboxes>
| bin _time span=5m
| stats max(Items) AS Max by _time User
| delta Max as Delta
| fields _time Delta User

returns:

_time  User   Max  Delta
09:15  user1  103
09:15  user2  251  148
09:15  user3  17   -234

and I want the users to be treated as individual accounts, not merged with each other. I assume I need to use streamstats for this, but so far I've been unable to work out how.
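A minimal sketch of the streamstats equivalent (hedged: field names come from the searches above, and the sort matters because streamstats operates on event order). delta has no by clause, but streamstats can carry the previous Max per User:

<basesearch multiple mailboxes>
| bin _time span=5m
| stats max(Items) AS Max by _time User
| sort 0 User _time
| streamstats current=f window=1 last(Max) AS PrevMax by User
| eval Delta=Max-PrevMax
| fields _time User Max Delta
| where Delta>=10

With current=f and window=1, each row sees only the immediately preceding row for the same User, so the first sample per user gets a null Delta instead of inheriting another inbox's value.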
I'm learning about joins and subsearches. What is the following query executing, and would there be a way to make it more efficient?

index=old_index
| stats count values(d) as d by username
| join type=inner username
    [ search index=new_index
      | stats count by username ]

I believe it starts by searching and counting usernames in the new index; however, I'm getting mixed up after that.
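A hedged reading plus a join-free sketch: the subsearch runs first, producing one row per username found in new_index; the inner join then keeps only those rows from the outer old_index stats whose username also appears in the subsearch results. Subsearches are subject to result and time limits, which is the usual efficiency concern, so one common single-pass alternative (assuming username means the same thing in both indexes) is:

(index=old_index) OR (index=new_index)
| stats count(eval(index="old_index")) AS old_count count(eval(index="new_index")) AS new_count values(d) AS d by username
| where old_count>0 AND new_count>0

This scans both indexes once and avoids the subsearch result ceiling entirely.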
Hi all, I have a dashboard where one panel's output is used to drive the search of a second panel.

First panel:

<title>Juniper Mnemonics</title>
<table>
  <search>
    <query>index=nw_syslog | search hostname="*DCN*" | stats count by cisco_mnemonic, hostname | sort - count</query>
    <earliest>$field1.earliest$</earliest>
    <latest>$field1.latest$</latest>
  </search>
  <option name="drilldown">row</option>
  <option name="refresh.display">progressbar</option>
  <drilldown>
    <condition field="cisco_mnemonic">
      <set token="message_token">$click.value$</set>
    </condition>
    <condition field="hostname">
      <set token="hostname_token">$click.value$</set>
    </condition>
    <condition field="count"></condition>
  </drilldown>
</table>

Two values from this panel feed the second panel's search.

Second panel:

index=nw_syslog
| search hostname="*DCN*"
| search cisco_mnemonic="$message_token$"
| search hostname="$hostname_token$"
| stats count by message
| sort - count

Issue: whenever I click a row in the first panel's table (ROW is set as the click selection), the tokens are not set correctly; only the cisco_mnemonic value is fetched, for both cisco_mnemonic and hostname. Please guide me on how I can set both tokens in a single click.
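A minimal sketch of the usual fix (hedged: the field names are taken from the panel above). Only one <condition> fires per click, which is why a single value wins. With row drilldown, the predefined $row.<fieldname>$ tokens expose every column of the clicked row, so both tokens can be set unconditionally:

<drilldown>
  <set token="message_token">$row.cisco_mnemonic$</set>
  <set token="hostname_token">$row.hostname$</set>
</drilldown>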
I have two sourcetypes, vpn and winevents. How do you write a single query to get the winevents of the top 5 busiest machines for IP X (one IP is used by many users)? The vpn sourcetype contains both hostname and IP, while winevents contains only the hostname. I'm assuming I'd use the append command and a subsearch:

sourcetype=winevents
| append [search sourcetype=vpn]
| top limit=5

Any help is appreciated, thanks.
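A minimal sketch of a subsearch-driven filter instead of append (hedged: the hostname and src_ip field names, and the literal IP, are assumptions; substitute the actual vpn field names). The subsearch resolves the five busiest hostnames for the IP in the vpn data, and its output becomes a hostname filter on the outer winevents search:

sourcetype=winevents
    [ search sourcetype=vpn src_ip="X.X.X.X"
      | top limit=5 hostname
      | fields hostname ]
| stats count by hostname

top emits extra count/percent columns, which is why the trailing fields hostname matters: only hostname survives the subsearch to become the filter.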
We are getting the below error while executing the query:

com.microsoft.sqlserver.jdbc.SQLServerException: The value is not set for the parameter number 1.

Kindly advise on the resolution.
Hi, I need help evaluating the CSV files under the "<Splunk directory>\etc\apps\search\lookups" folder. We have multiple CSV files in this folder, and I need to check which CSV files are not in use, and which searches use the others, so that the unused CSV files can be deleted.
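A minimal sketch of one way to inventory lookup usage in saved searches via the REST endpoint (hedged: this only catches references inside saved search strings; dashboards, other apps' knowledge objects, and ad-hoc usage, e.g. via index=_audit, would need separate checks):

| rest /servicesNS/-/-/saved/searches
| fields title search
| rex field=search max_match=0 "(?<lookup_file>[\w\.\-]+\.csv)"
| search lookup_file=*
| mvexpand lookup_file
| stats values(title) AS used_by by lookup_file

Comparing that list against the directory contents (or against | rest /services/data/lookup-table-files) should surface candidates for deletion.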
Hello team, we have been using the Splunk App for Jenkins in our environment, and the app has a separate Job Insight dashboard. Within the Job Insight dashboard there is a "Latest Build" panel that displays the build value. The expectation is that when the value in the panel is clicked, the dashboard displays the build results, taking the click value as a variable. However, we see that the value is not passed when clicking on the panel.

Code on the respective page ($click.value$ is not being substituted):

<link>
<![CDATA[build?type=build&host=$host$&job=$jobName$&build=$click.value$]]>
</link>

In the dashboard URL:

splunk_app_jenkins/build?type=build&host=JENKINS-F1DEVOPS-PRDINTRANET-IE&job=F1_ALL_DEV_SML/f1-all-dev-sml/feature%2FGDS-641_lint_tools&build=$click.value$
Hello guys, we have to integrate one of our SQL Servers with Splunk; its current version is SQL Server 2014. We are using the Splunk DB Connect app to configure it. Kindly confirm: if the SQL team upgrades to SQL Server 2017, is that compatible with Splunk DB Connect, or do they need to upgrade to SQL Server 2019? Please provide any solutions/documents on this.
Hi Community, I'm currently facing a concern: the Health Rule Violations API has been returning less information in the "description" field since my company updated the controller from version 4.5.16.2272 to 20.7.2-2909. Here is the API result comparison between both versions (alongside the deepLinkUrl output):

4.5.16.2272:

"description": "AppDynamics has detected a problem with Application <b>APP-1</b>.<br><b>Service Availability</b> continues to violate with <b>critical</b>.<br>All of the following conditions were found to be violating<br>For Application <b>APP-1</b>:<br>1) API-PORTAL<br><b>Availability's</b> value <b>2.00</b> was <b>less than</b> the threshold <b>3.00</b> for the last <b>30</b> minutes<br>"

20.7.2-2909:

"description": "AppDynamics has detected a problem.<br><b>Business Transaction Health</b> is violating."

I'm wondering whether anything related to the health rule was misconfigured, or whether there is a way to fine-tune the alert to show the detailed version. Thank you!
index name = my_index
source name = my_source
sourcetype = my_sourcetpye
host = 192.168.0.10
-----------------------------
I want to map the action field as follows: action=allow -> my_allow, action=deny -> my_deny, anything else -> my_myontype. How can I change it to this? Please help.
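A minimal sketch with eval case() (hedged: the output field name mapped_action is a hypothetical choice; the index, source, sourcetype, host, and value names are taken from the question as written, including my_sourcetpye and my_myontype):

index=my_index source=my_source sourcetype=my_sourcetpye host=192.168.0.10
| eval mapped_action=case(action="allow", "my_allow", action="deny", "my_deny", true(), "my_myontype")

The true() branch is the catch-all for any other action value.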
Is it possible to send alert logs from FireEye CM (Central Management) to the FireEye App for Splunk?