All Topics

Hi All, my requirement is to get a time range of exactly the same length as the one I select from the time picker. For example, if I select a 3-hour range, say 3 PM to 6 PM, then I need the data from 12 PM to 3 PM. What I am actually trying to do: I have counts of events over time, and for the selected time span I get the correct count, but I also need the data for the span of the same length immediately before it, and I want to show both spans on a timechart. Please help. Thanks!
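For illustration, one approach that is sometimes suggested (a sketch, untested; the index name and the doubled window are assumptions): search over twice the selected span, label each event as current or previous with addinfo, and shift the previous half forward so both overlay on the timechart.

```spl
index=your_index earliest=-6h latest=now
| addinfo
| eval boundary = info_min_time + (info_max_time - info_min_time) / 2
| eval period = if(_time >= boundary, "current", "previous")
| eval _time = if(period == "previous", _time + (boundary - info_min_time), _time)
| timechart count by period
```

Here earliest=-6h stands in for "twice the picked range"; in a dashboard the doubled earliest would have to be computed from the time-picker tokens.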
Hi everyone, I have a requirement. I have one dashboard consisting of two panels, Request Types and Users.

Query for the Request_Type panel:

    index=abc sourcetype=xyz source="user.log" process-groups
    | rex "\)\s+(?<Request_Type>[^ ]+)"
    | chart count(Request_Type) as "Request- Types" by Request_Type
    | search $req$

Query for the Users panel:

    index=abc sourcetype=xyz source="user.log" process-groups
    | rex "\<(?<Request_User>\w+)\>\<"
    | chart count(Request_User) as "Users" by Request_User
    | search $usr$

I also have two dropdowns in the same dashboard, for Request_Type and Users.

Query for the Request_Type dropdown:

    <input type="multiselect" token="req" searchWhenChanged="true">
      <label>Request Type</label>
      <choice value="*">All Request_Type</choice>
      <search>
        <query>index=abc sourcetype=xyz source="user.log" process-groups | rex "\)\s+(?&lt;Request_Type&gt;[^ ]+)" | stats count by Request_Type</query>
        <earliest>-60d@d</earliest>
        <latest>now</latest>
      </search>
      <fieldForLabel>Request_Type</fieldForLabel>
      <fieldForValue>Request_Type</fieldForValue>
      <prefix>(</prefix>
      <valuePrefix>Request_Type ="</valuePrefix>
      <valueSuffix>"</valueSuffix>
      <delimiter> OR </delimiter>
      <suffix>)</suffix>
      <initialValue>*</initialValue>
      <default>*</default>
    </input>

Query for the Users dropdown:

    <input type="multiselect" token="usr" searchWhenChanged="true">
      <label>NiFi_Users</label>
      <choice value="*">All Users</choice>
      <search>
        <query>index=abc sourcetype=xyz source="user.log" process-groups | rex "\&lt;(?&lt;Request_User&gt;\w+)\&gt;\&lt;" | stats count by Request_User</query>
        <earliest>-60d@d</earliest>
        <latest>now</latest>
      </search>
      <fieldForLabel>Request_User</fieldForLabel>
      <fieldForValue>Request_User</fieldForValue>
      <prefix>(</prefix>
      <valuePrefix>Request_User ="</valuePrefix>
      <valueSuffix>"</valueSuffix>
      <delimiter> OR </delimiter>
      <suffix>)</suffix>
      <initialValue>*</initialValue>
      <default>*</default>
    </input>

The issue I am facing: when I select "PUT" from the Request_Type dropdown, I get the correct data in the Request_Type panel but not in the Users panel, where all the users still appear. I want only the users associated with "PUT" when I select "PUT" from the Request_Type dropdown, and likewise only the users associated with "GET" when I select "GET". Since the Request_Type field is only extracted in the other search, the $req$ token is not working properly in the Users panel.
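For illustration, one possible fix that is often suggested for this kind of cascading filter (a sketch, untested against this data): extract Request_Type inside the Users panel search as well, so the $req$ token can filter the events before the users are charted.

```spl
index=abc sourcetype=xyz source="user.log" process-groups
| rex "\)\s+(?<Request_Type>[^ ]+)"
| rex "\<(?<Request_User>\w+)\>\<"
| search $req$
| chart count(Request_User) as "Users" by Request_User
| search $usr$
```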
I would like to configure Splunk to read files stored in an inbound folder. These files are written four times a day, but could be up to ten times a day. The files are sent from vendors and contain a "status". The status field will be used to create a report that will be emailed to users. The inbound folder is used by a number of vendors, so the log file name will be used to separate the vendor data.

    Date             File                                   Records  Status
    12/24/2020 0800  log_file_vendorA_122920200800.status  1200     Filed
    12/25/2020 1200  log_file_vendorA_122920200800.status  1200     Acknowledgment Sent
    12/29/2020 0800  log_file_vendorA_122920200800.status  1200     Acknowledged
Hello all, I'm having an issue where I am unable to create new correlation searches. I get the following error:

    There was an error saving the correlation search: In handler 'savedsearch': Data could not be written: /nobody/SplunkEnterpriseSecuritySuite/savedsearches/Threat

Also, the existing searches are not running, nor showing up in ES.
After enabling data integrity control for an index, I cannot find a way to generate hashes for existing buckets. `./splunk generate-hash-files` returns `Error: Cannot generate hash files for the bucket with path=/bucket/path, Reason=Journal has no hashes.` Rebuilding the buckets with: `./splunk fsck repair` does not generate hashes for the old buckets. Is this possible? Or are all buckets that were created before this option was enabled impossible to validate, even after the feature has been enabled?
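For context, data integrity control is enabled per index in indexes.conf, and it only affects buckets written after the setting takes effect (the index name below is a placeholder):

```ini
[your_index]
enableDataIntegrityControl = true
```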
Please, I need help with my Splunk query below. It is only showing one security metric based on my comparison. I have about 160 security metrics in total but am only seeing one. I need a Splunk query that will show all 160 security metrics based on the comparison below:

    index=security source=base_ad_metric_test_v3 earliest=-1y base_ad_metric>0
    | stats avg(base_ad_metric) AS avg stdev(base_ad_metric) AS stdev min(base_ad_metric) AS min max(base_ad_metric) AS max latest(base_ad_metric) AS latest_count BY Metric_ID
    | eval min_thres=5000, max_thres=7500
    | eval is_above_thres=if(latest_count>max_thres, 1, 0)
    | eval is_below_thres=if(latest_count<min_thres, 1, 0)
    | eval data_item_volatility=case(is_above_thres==1, "High", is_below_thres==1, "Low", true(), "normal")
    | lookup free_metrics.csv Metric_ID output Data_Item_volatility AS spreadsheet_Data_Item_volatility Operating_System_Metric_Calculation AS spreadsheet_Operating_System_Metric_Calculation Metric_Name AS spreadsheet_Metric_Name
What I am trying to accomplish is forcing the scheduler to dispatch a scheduled saved search through REST in order to update its cached results. Here's what I have working so far: one scheduled report with the search:

    | makeresults count=1
    | eval embed = "embeded report"
    | eval argumento = $args.argument$
    | table embed, argumento

Then I execute:

    curl -k -u admin:pass https://localhost:8089/servicesNS/admin/Admin_Tools/saved/searches/embed_report/dispatch -d args.argument=1

And get the response:

    <?xml version="1.0" encoding="UTF-8"?>
    <response>
      <sid>admin__admin_QWRtaW5fVG9vbHM__RMD5e57d89b9c983845a_at_1609434492_13436</sid>
    </response>

With the sid I can see that the argument was passed correctly:

    curl -k -u admin:pass https://localhost:8089/services/search/jobs/admin__admin_QWRtaW5fVG9vbHM__RMD5e57d89b9c983845a_at_1609434492_13436/results

    <?xml version='1.0' encoding='UTF-8'?>
    <results preview='0'>
      <meta>
        <fieldOrder>
          <field>embed</field>
          <field>argumento</field>
        </fieldOrder>
      </meta>
      <result offset='0'>
        <field k='embed'>
          <value><text>embeded report</text></value>
        </field>
        <field k='argumento'>
          <value><text>1</text></value>
        </field>
      </result>
    </results>

The thing is, I want to have this report embedded in a web app. From this app, I want to perform a request to the REST API with this argument and have the embedded report updated. However, the embedded report shows the last cached results, which are saved in a job with a name like scheduler__nobody_XXXX-XXXX-XXXXXX. Any ideas how this can be achieved? Thank you.
I've been trying tirelessly to get this to work on Ubuntu 20. My process so far:

1. Install Splunk with the deb package. Seems to work just fine.
2. Log in to Splunk and install eStreamer eNcore. No issues here.
3. Enable all the data inputs, file and scripts. No issues here.
4. Jump to the CLI and attempt to get into the /opt/splunk/etc/apps/TA-eSteamer directory. Turns out Splunk installed this as root:root. I changed it to splunk:splunk and 755 like all the other apps. Doesn't appear to cause any harm and lets me in.
5. Edit splencore.sh for the home directory.
6. Copy in the client.pkcs12 and
7. Run sudo ./splencore.sh test.
8. Run the openssl commands that splencore.sh says to run. No issues here. This generates the files in the encore directory with the IP of the FMC.
9. Run sudo ./splencore.sh test again.

Here is where I get the error I cannot fix or get past. Below you will see I'm using Python 2.7 where the latest Splunk uses Python 3.7. I changed this in the splencore.sh pybin var because I saw others stating 2.7 was needed. It didn't fix anything for me, however.

    ERROR:root:code for hash sha1 was not found.
    Traceback (most recent call last):
      File "/opt/splunk/lib/python2.7/hashlib.py", line 147, in <module>
        globals()[__func_name] = __get_hash(__func_name)
      File "/opt/splunk/lib/python2.7/hashlib.py", line 97, in __get_builtin_constructor
        raise ValueError('unsupported hash type ' + name)
    ValueError: unsupported hash type sha1
    Traceback (most recent call last):
      File "./estreamer/preflight.py", line 34, in <module>
        import estreamer.crossprocesslogging
      File "/opt/splunk/etc/apps/TA-eStreamer/bin/encore/estreamer/__init__.py", line 28, in <module>
        from estreamer.connection import Connection
      File "/opt/splunk/etc/apps/TA-eStreamer/bin/encore/estreamer/connection.py", line 23, in <module>
        import ssl
      File "/opt/splunk/lib/python2.7/ssl.py", line 98, in <module>
        import _ssl # if we can't import it, let the error propagate
    ImportError: libssl.so.1.0.0: cannot open shared object file: No such file or directory

Any help would be appreciated. I've rebuilt this thing so many times and tried everything I can think of.
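One detail that may matter, given that the traceback still shows /opt/splunk/lib/python2.7 paths (this is an assumption, not a confirmed fix): pointing the script's pybin variable back at Splunk's bundled Python 3 rather than the Python 2.7 runtime, for example:

```
# hypothetical pybin value in splencore.sh; the path assumes a default /opt/splunk install
pybin=/opt/splunk/bin/python3
```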
Hi, I have a weird reaction from my dropdown input: it seems the input creates an empty value in the dropdown choices, whereas when I run the input's search, the result gives just the ID list. Here is my XML code:

    <row>
      <panel>
        <title>ID list</title>
        <input type="dropdown" token="id_tok" searchWhenChanged="true">
          <label>Select an ID to delete</label>
          <fieldForLabel>id</fieldForLabel>
          <fieldForValue>id</fieldForValue>
          <search>
            <query>| inputlookup id_list.csv | fields id | dedup id</query>
            <earliest>-1m@m</earliest>
            <latest>now</latest>
          </search>
          <prefix>"</prefix>
          <suffix>"</suffix>
        </input>
        <table>
          <search>
            <query>| makeresults | eval ID=$id_tok$ | fields - _time | dedup ID | outputlookup append=t id_delete.csv</query>
            <earliest>-1m</earliest>
            <latest>now</latest>
            <sampleRatio>1</sampleRatio>
            <refresh>1m</refresh>
            <refreshType>delay</refreshType>
          </search>
          <option name="count">20</option>
          <option name="dataOverlayMode">none</option>
          <option name="drilldown">none</option>
          <option name="percentagesRow">false</option>
          <option name="refresh.display">progressbar</option>
          <option name="rowNumbers">false</option>
          <option name="totalsRow">false</option>
          <option name="wrap">true</option>
        </table>
      </panel>
    </row>

Besides, my panel search does not seem to work: "Error in 'eval' command: Failed to parse the provided arguments. Usage: eval dest_key = expression." The fact is that when I open the search, the token value is not set... Can you help me? Thanks!
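For illustration, a sketch of two changes that are often suggested for this symptom (untested): filter empty values out of the populating search, and set a default so the token has a value when the dashboard loads.

```xml
<input type="dropdown" token="id_tok" searchWhenChanged="true">
  <label>Select an ID to delete</label>
  <fieldForLabel>id</fieldForLabel>
  <fieldForValue>id</fieldForValue>
  <search>
    <query>| inputlookup id_list.csv | where isnotnull(id) AND id != "" | dedup id | fields id</query>
  </search>
  <prefix>"</prefix>
  <suffix>"</suffix>
</input>
```

A <default> or <initialValue> element on the input would also keep the panel's eval from running with an unset token.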
I have a dashboard that just shows a table of results. The query is built with parameters taken from a TextBox. I would like to take them from the DateTime picker instead, but it seems to apply only to regular indexed events. Is there any way to use it to insert SQL-formatted DateTime values into my dbxquery?
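For illustration, one pattern that is sometimes used for this (a sketch; token and column names are assumptions): convert the picker's epoch boundaries into SQL-formatted strings with strftime, then splice them into the dbxquery as tokens.

```spl
| makeresults
| addinfo
| eval sql_start = strftime(info_min_time, "%Y-%m-%d %H:%M:%S")
| eval sql_end = strftime(info_max_time, "%Y-%m-%d %H:%M:%S")
```

With sql_start and sql_end captured as dashboard tokens from a search like this, the dbxquery could use something like WHERE event_time BETWEEN '$sql_start$' AND '$sql_end$' (the column name here is hypothetical).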
Dear all, I am getting data from the search head in JSON format. The first field of each event is a timestamp in epoch format with 16 digits ("timestamp": 1609414219738696). My problem is that I need to onboard the data with the _time value taken from the timestamp field. So in the props.conf file on the cluster master I updated as below:

    TIMESTAMP_FIELDS = timestamp
    TIME_FORMAT = %s%6N

But the _time field is not populated properly, and I am getting 2 values in the indexed data for the timestamp field. Please help me with this.
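For reference, a props.conf sketch of how a 16-digit epoch (seconds plus microseconds) is usually declared; the sourcetype name is a placeholder, and TIMESTAMP_FIELDS only applies when indexed extractions are doing the parsing:

```ini
[your_json_sourcetype]
INDEXED_EXTRACTIONS = json
TIMESTAMP_FIELDS = timestamp
TIME_FORMAT = %s%6N
```

Note also that a stanza like this must live on the instance that first parses the data; for INDEXED_EXTRACTIONS that is typically the forwarder rather than the indexers the cluster master manages.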
Morning all, I've set up several internal lookup files and made them part of an intelligence download. I've added lookup definitions for each file so Splunk can programmatically read them as well. However, I'm getting an "Invalid Threatlist Stanza" error; any thoughts on how to investigate these? My only thought is that the threatlist stanza only looks for certain column names within the scope of the lookup file?

The columns available in these internal lookups: category, type, Event.info, comment, value, Country, timestamp, count

The Intelligence Downloads status shows:

    _time = 2020-12-31 10:10:04
    stanza = 222
    disabled = 0
    type = threatlist
    url = lookup://222.csv
    weight = 1
    exit_status = 0
    download_status = Invalid threatlist stanza.
    run_duration = 0.0
Hi, I have a table like this:

    id   name   app           env
    123  test1  [app]:my_app  [env]:my_env
    456  test2  [env]:my_env  [app]:my_app

My issue is that the values under the app and env headers are mixed up. How can I put the [app] values in the "app" column and the [env] values in the "env" column? Is this possible? Thanks!
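For illustration, one way this could be untangled in SPL (a sketch, assuming the [app]:/[env]: tags are always present and reliable): merge the two fields and re-extract each value by its own tag.

```spl
| eval combined = coalesce(app, "") . " " . coalesce(env, "")
| rex field=combined "\[app\]:(?<app>\S+)"
| rex field=combined "\[env\]:(?<env>\S+)"
| fields - combined
```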
All our Splunk instances are installed on Linux servers and we will apply OS patches. I have 3 indexers, 4 search heads, 1 deployment server, and 1 heavy forwarder (an indexer cluster integrated with a SHC). Could you please advise what I should stop first, or can I stop them all at the same time? Will I lose data during this activity?
Hi! I can't find a Universal Forwarder build for FreeBSD. Does one exist?
In my standalone environment I have configured the index as below. I want to know about volumes, and also: if I set coldToFrozenDir=$SPLUNK_DB/ind1/frozendb, will that create the folder automatically?

    [ind1]
    homePath = $SPLUNK_DB/ind1/db
    coldPath = $SPLUNK_DB/ind1/colddb
    thawedPath = $SPLUNK_DB/ind1/thaweddb
    maxHotBuckets = 10
    maxDataSize = 10000
    maxWarmDBCount = 300
    maxTotalDataSizeMB = 200000
    frozenTimePeriodInSecs = 31536000
    coldToFrozenDir = $SPLUNK_DB/ind1/frozendb
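On the volume part of the question, a minimal indexes.conf sketch (the path and sizes are placeholders): a volume is a named storage pool with its own overall size cap, which homePath and coldPath can then reference instead of $SPLUNK_DB.

```ini
[volume:hotwarm]
path = /opt/splunk/var/lib/splunk
maxVolumeDataSizeMB = 500000

[ind1]
homePath = volume:hotwarm/ind1/db
coldPath = volume:hotwarm/ind1/colddb
```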
Hi, based on your suggestion I prepared queries for two different apps as below. Now I need to combine these two and get a single stats table, like this:

    jId  Applname  diff  ASNumber  StNumber  count
    xy   app1      23    983723              2
    uw   app2      98              377813    1

Query 1:

    | rex field=_raw "ApplicationName:\s+\[(?P<Applname>.*)];"
    | rex field=_raw "jobId: (?<jId>\w+);"
    | rex field=_raw "\<ASNumber\>(?<ASNumber>[^\<]+)\<[^\<]"
    | eventstats count(jId) as jIdcount by ASNumber
    | where jIdcount > 1
    | stats range(_time) as diff, first(ASNumber) as ASNumber, count(ASNumber) as count by jId, Applname

Query 2:

    | rex field=_raw "ApplicationName:\s+\[(?P<Applname>.*)];"
    | rex field=_raw "jobId: (?<jId>\w+);"
    | rex field=_raw "StNumber\":\"(?P<StNumber>.[^\"\,\"]*)"
    | eventstats count(jId) as jIdcount by StNumber
    | where jIdcount > 1
    | stats range(_time) as diff, first(StNumber) as StNumber, count(StNumber) as count by jId, Applname
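For illustration, one way the two result sets are often combined (a sketch, untested; the index name is a placeholder): run the second query as an append subsearch, then line the columns up with table.

```spl
index=your_index
| rex field=_raw "ApplicationName:\s+\[(?P<Applname>.*)];"
| rex field=_raw "jobId: (?<jId>\w+);"
| rex field=_raw "\<ASNumber\>(?<ASNumber>[^\<]+)\<[^\<]"
| eventstats count(jId) as jIdcount by ASNumber
| where jIdcount > 1
| stats range(_time) as diff, first(ASNumber) as ASNumber, count(ASNumber) as count by jId, Applname
| append
    [ search index=your_index
    | rex field=_raw "ApplicationName:\s+\[(?P<Applname>.*)];"
    | rex field=_raw "jobId: (?<jId>\w+);"
    | rex field=_raw "StNumber\":\"(?P<StNumber>.[^\"\,\"]*)"
    | eventstats count(jId) as jIdcount by StNumber
    | where jIdcount > 1
    | stats range(_time) as diff, first(StNumber) as StNumber, count(StNumber) as count by jId, Applname ]
| table jId, Applname, diff, ASNumber, StNumber, count
```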
Hey everyone, I have data being sent into the defined index via Kinesis. When I load the dashboard, nothing populates. The underlying query is:

    `aws-vpc-flow-log-index` source="dest_port" vpcflow_action=ACCEPT protocol=* (interface_id="*") (aws_account_id="*")
    | dedup _time interface_id aws_account_id protocol vpcflow_action
    | stats sum(total_bytes) as total_bytes sum(total_packets) as total_packets by interface_id aws_account_id protocol vpcflow_action
    | stats dc(interface_id) as interfaces

This returns a value of 0. However, if I remove source="dest_port", the query responds correctly and returns the correct value. I must be doing something wrong here.
Hello everyone, good evening and happy holidays! I have a tricky question (I think it is kind of tricky) about tokenization. I am working on a dashboard that has tabs, and each tab has subtabs. The problem comes with the conditions and how the dashboard runs: somehow the tokens are not following the conditions. What I mean is that the default choices are running their queries even when the selected tab is a different one. For example, my default tab is "Tab 1" and it defaults to "Tab 1 - Sub Tab 1", but "Tab 2" & "Tab 2 Sub Tab 1" and "Tab 3" & "Tab 3 Sub Tab 1" are running their queries even though those panels are not visible. Here's a snippet of my XML:

    <form>
      <label>SODVAL - Zero MI Validation</label>
      <fieldset submitButton="false">
        <input id="major_tabs" type="link" token="major_tabs">
          <label>Choose a view</label>
          <choice value="tab_1">Tab 1</choice>
          <choice value="tab_2">Tab 2</choice>
          <choice value="tab_3">Tab 3</choice>
          <default>tab_1</default>
          <change>
            <condition value="tab_1">
              <set token="tab_1">true</set>
              <unset token="tab_2"></unset>
              <unset token="tab_3"></unset>
            </condition>
            <condition value="tab_2">
              <set token="tab_2">true</set>
              <unset token="tab_1"></unset>
              <unset token="tab_3"></unset>
            </condition>
            <condition value="tab_3">
              <set token="tab_3">true</set>
              <unset token="tab_2"></unset>
              <unset token="tab_1"></unset>
            </condition>
          </change>
        </input>
      </fieldset>
      <row depends="$tab_1$">
        <panel>
          <input id="tab1_subtabs" type="link" token="tab1_subtabs">
            <choice value="tab1_subTab_1">Tab 1</choice>
            <choice value="tab1_subTab_2">Tab 2</choice>
            <choice value="tab1_subTab_3">Tab 3</choice>
            <default>tab1_subTab_1</default>
            <change>
              <condition value="tab1_subTab_1">
                <set token="tab1_subTab_1">true</set>
                <unset token="tab1_subTab_2"></unset>
                <unset token="tab1_subTab_3"></unset>
              </condition>
              <condition value="tab1_subTab_2">
                <set token="tab1_subTab_2">true</set>
                <unset token="tab1_subTab_1"></unset>
                <unset token="tab1_subTab_3"></unset>
              </condition>
              <condition value="tab1_subTab_3">
                <set token="tab1_subTab_3">true</set>
                <unset token="tab1_subTab_2"></unset>
                <unset token="tab1_subTab_1"></unset>
              </condition>
            </change>
          </input>
          <chart depends="$tab1_subTab_1$">
            <search depends="$tab1_subTab_1$">
              <query> *some query* </query>
            </search>
          </chart>
          <chart depends="$tab1_subTab_1$">
            <search depends="$tab1_subTab_1$">
              <query> *some query* </query>
            </search>
          </chart>
          <chart depends="$tab1_subTab_2$">
            <search depends="$tab1_subTab_2$">
              <query> *some query* </query>
            </search>
          </chart>
          <chart depends="$tab1_subTab_2$">
            <search depends="$tab1_subTab_2$">
              <query> *some query* </query>
            </search>
          </chart>
          <chart depends="$tab1_subTab_3$">
            <search depends="$tab1_subTab_3$">
              <query> *some query* </query>
            </search>
          </chart>
          <chart depends="$tab1_subTab_3$">
            <search depends="$tab1_subTab_3$">
              <query> *some query* </query>
            </search>
          </chart>
        </panel>
      </row>
      <row depends="$tab_2$">
        <panel>
          <input id="tab2_subtabs" type="link" token="tab2_subtabs">
            <choice value="tab2_subTab_1">Tab 1</choice>
            <choice value="tab2_subTab_2">Tab 2</choice>
            <choice value="tab2_subTab_3">Tab 3</choice>
            <default>tab2_subTab_1</default>
            <change>
              <condition value="tab2_subTab_1">
                <set token="tab2_subTab_1">true</set>
                <unset token="tab2_subTab_2"></unset>
                <unset token="tab2_subTab_3"></unset>
              </condition>
              <condition value="subTab_2">
                <set token="subTab_2">true</set>
                <unset token="subTab_1"></unset>
                <unset token="subTab_3"></unset>
              </condition>
              <condition value="tab2_subTab_3">
                <set token="tab2_subTab_3">true</set>
                <unset token="tab2_subTab_2"></unset>
                <unset token="tab2_subTab_1"></unset>
              </condition>
            </change>
          </input>
          <chart depends="$tab2_subTab_1$">
            <search depends="$tab2_subTab_1$">
              <query> *some query* </query>
            </search>
          </chart>
          <chart depends="$tab2_subTab_1$">
            <search depends="$tab2_subTab_1$">
              <query> *some query* </query>
            </search>
          </chart>
          <chart depends="$tab2_subTab_2$">
            <search depends="$tab2_subTab_2$">
              <query> *some query* </query>
            </search>
          </chart>
          <chart depends="$tab2_subTab_2$">
            <search depends="$tab2_subTab_2$">
              <query> *some query* </query>
            </search>
          </chart>
          <chart depends="$tab2_subTab_3$">
            <search depends="$tab2_subTab_3$">
              <query> *some query* </query>
            </search>
          </chart>
          <chart depends="$tab2_subTab_3$">
            <search depends="$tab2_subTab_3$">
              <query> *some query* </query>
            </search>
          </chart>
        </panel>
      </row>
      <row depends="$tab_3$">
        <panel>
          <input id="tab3_subtabs" type="link" token="tab3_subtabs">
            <label>tab3_subtabs</label>
            <choice value="tab3_subTab_1">Tab 1</choice>
            <choice value="tab3_subTab_2">Tab 2</choice>
            <choice value="tab3_subTab_3">Tab 3</choice>
            <default>tab3_subTab_1</default>
            <change>
              <condition value="tab3_subTab_1">
                <set token="tab3_subTab_1">true</set>
                <unset token="tab3_subTab_2"></unset>
                <unset token="tab3_subTab_3"></unset>
              </condition>
              <condition value="tab3_subTab_2">
                <set token="tab3_subTab_2">true</set>
                <unset token="tab3_subTab_1"></unset>
                <unset token="tab3_subTab_3"></unset>
              </condition>
              <condition value="tab3_tab2_subTab_3">
                <set token="tab3_tab2_subTab_3">true</set>
                <unset token="tab3_tab2_subTab_2"></unset>
                <unset token="tab3_tab2_subTab_1"></unset>
              </condition>
            </change>
          </input>
          <chart depends="$tab3__subTab_1$">
            <search depends="$tab3__subTab_1$">
              <query> *some query* </query>
            </search>
          </chart>
          <chart depends="$tab3__subTab_1$">
            <search depends="$tab3__subTab_1$">
              <query> *some query* </query>
            </search>
          </chart>
          <chart depends="$tab3__subTab_2$">
            <search depends="$tab3__subTab_2$">
              <query> *some query* </query>
            </search>
          </chart>
          <chart depends="$tab3__subTab_2$">
            <search depends="$tab3__subTab_2$">
              <query> *some query* </query>
            </search>
          </chart>
          <chart depends="$tab3__subTab_3$">
            <search depends="$tab3__subTab_3$">
              <query> *some query* </query>
            </search>
          </chart>
          <chart depends="$tab3__subTab_3$">
            <search depends="$tab3__subTab_3$">
              <query> *some query* </query>
            </search>
          </chart>
        </panel>
      </row>
    </form>

My question is: am I doing something wrong?
And is there a way to check for or avoid this behavior? Note: my actual dashboard has more code and queries, but it is basically the same structure; this snippet replicates the issue. Note 2: sometimes one or two of the panels don't even load. I appreciate all your help. Thanks in advance!
We have a situation where we are exchanging data between OTM (Oracle Transportation Management) and SAP. The middleware is Dell Boomi, and Splunk is being used as a monitoring tool. OTM is on a SaaS cloud, hence Splunk cannot look into OTM log files and the database to monitor the transactions. I wanted to know how Splunk does monitoring with SaaS applications. Thanks.