All Topics


Hi, I'm looking for a suggestion/query to monitor the triggered alerts of one particular search head (one Splunk URL) from another search head (another Splunk URL), with four fields included: _time, Alert Name, Mail notifications, and results.
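A minimal SPL sketch of one way to approach this, assuming the other search head's _internal index is searchable from the monitoring instance (for example because it is a search peer, or forwards its internal logs); the host filter is a placeholder and the field names come from scheduler.log, not from the post:

  index=_internal sourcetype=scheduler alert_actions=* host=<other_search_head>
  | table _time savedsearch_name alert_actions result_count
  | rename savedsearch_name AS "Alert Name", alert_actions AS "Mail notifications", result_count AS results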
I have many agent versions, and each row is displayed as a different version, exactly as the query tells it to. I need help truncating every period and trailing digit down to the single major-version digit, so that 6.3.0.0, 6.2.1, 7.3.3, and 7.21 look like this: 6, 7.
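A minimal sketch, assuming the version string lives in a field named version (the field name is an assumption): split on the periods and keep the first segment.

  ... | eval major_version = mvindex(split(version, "."), 0)
  | stats count by major_version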
I have been trying for two days to get the proper syntax to display the UF agent version along with the RHEL os_release as a single report. I can get each separately, but not together. Can anyone help?
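A hedged sketch of one common pattern: forwarder versions can be read from the tcpin_connections metrics on the indexers, then combined with an OS inventory search by host. The os_release index and sourcetype below are placeholders, not known from the post:

  index=_internal source=*metrics.log* group=tcpin_connections
  | stats latest(version) AS uf_version BY hostname
  | join type=left hostname
      [ search index=<os_inventory_index> sourcetype=<os_sourcetype>
      | stats latest(os_release) AS os_release BY hostname ]
  | table hostname uf_version os_release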
I am using DB Connect to pull data from a database, and the input will be set up as RISING, using SQL to select the data. There are three date columns in the database; I convert them to epoch time, and the highest epoch time is created as a column (MaxDate) to be used as the RISING COLUMN. Questions: Will MaxDate work as a RISING COLUMN? If yes: I encountered "java.sql.SQLException: Parameter #1 has not been set." Is there anyone who could help, or suggest another way? Our only choice is to use the max date to be able to pull the updated rows. Thanks
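On the error: in rising-column mode, DB Connect substitutes the stored checkpoint into a ? placeholder, so "Parameter #1 has not been set" usually means the query has no ? in its WHERE clause. A sketch under that assumption (table and column names are hypothetical, and GREATEST is database-specific):

  SELECT t.*, GREATEST(date1, date2, date3) AS MaxDate
  FROM my_table t
  WHERE GREATEST(date1, date2, date3) > ?
  ORDER BY MaxDate ASC

A computed column like MaxDate can generally serve as the rising column as long as the same expression appears in both the WHERE ? comparison and the ORDER BY.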
So I'm pretty new to Splunk, and I do feel like this should be a lot simpler than I'm making it. I need two epoch times that are in the same cell to be subtracted from each other, and I haven't been able to find anything that helps or to figure it out myself. I didn't want to use mvexpand because I want the subtraction to be based on the user. My search result looks like this right now:

  Name   Epoch
  UserA  1625037039 1625037045
  UserB  1625050381 1625050423
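A minimal sketch, assuming Epoch is a multivalue field holding exactly two values per row: mvindex picks each value out without mvexpand, so the per-user pairing is preserved.

  ... | eval duration = mvindex(Epoch, 1) - mvindex(Epoch, 0)
  | table Name Epoch duration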
Hi all, I have a file that I want to monitor on a Heavy Forwarder (HF) that is also the Deployment Server (DS) at the same time. Since a deployment server cannot be a client of itself, I placed a manually created app at /opt/splunk/etc/apps/appname/default/inputs.conf. Now, after I reconfigure the inputs, the update randomly gets removed again after some time, as if it were a deployed app. My question: what is the best practice for monitoring local files on a combined HF/DS? Best, O.
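One commonly suggested practice, sketched here with hypothetical names: keep locally managed inputs in an app whose name does not collide with anything under serverclass.conf/deployment-apps (a deployed app of the same name will overwrite the local copy), and put the settings in local rather than default, e.g. /opt/splunk/etc/apps/local_hf_inputs/local/inputs.conf:

  [monitor:///var/log/myfile.log]
  index = main
  sourcetype = my_sourcetype
  disabled = 0

The path, index, and sourcetype above are placeholders for illustration.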
Hi. I am upgrading from 8.1.0 to 8.2.1 and received the bundle replication issue below: Problem replicating config (bundle) to search peer '10.150.x.x:8089', Upload bundle="/opt/splunk/var/run/SHD01-1625054310.bundle" to peer name=IND01 uri=https://10.150.x.x:8089 failed; http_status=409 http_description="Conflict". I received this error for each member of the indexer cluster. My search head is a standalone server. Could anyone please help? Linh
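A hedged way to confirm what the search head sees for each peer before and after the upgrade (the REST endpoint exists, though the available output fields can vary by version):

  | rest /services/search/distributed/peers
  | table peerName version status replicationStatus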
I am having trouble with my current deployment, which uses UFs on all laptops and a UF (intermediate forwarder) in our DMZ that forwards up to Splunk Cloud. The reason for this is that we want all laptops to be able to reach the UF in the DMZ even when not on VPN (using SSL). The only ports open on the UF in the DMZ from the internet are 8089 and 9997, and the DMZ UF can send outbound on those ports as well.

##UF outputs.conf on laptops##
[tcpout:server group]
server = single server address:9997
# SSL SETTINGS
#clientCert = $SPLUNK_HOME/etc/apps/server group_outputs/auth/server.pem
sslCertPath = $SPLUNK_HOME/etc/apps/server group_outputs/auth/UF_INT_server.pem
sslRootCAPath = $SPLUNK_HOME/etc/apps/100_splunkcloud/default/UF_INT_cacert.pem
useClientSSLCompression=true

What should the inputs.conf file look like to establish the connection between the UF (laptop) and the intermediate forwarder, and what should the outputs.conf file look like to establish the connection between the intermediate forwarder and the cloud, all using SSL? Can both inputs and outputs use port 9997 as well?
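A sketch of the two sides, with all paths and names hypothetical and setting names per recent Splunk versions. On the intermediate forwarder, inputs.conf listens with SSL:

  [splunktcp-ssl:9997]
  disabled = 0

  [SSL]
  serverCert = $SPLUNK_HOME/etc/apps/ssl_inputs/auth/server.pem
  sslPassword = <cert password>
  requireClientCert = false

The outputs side of the intermediate forwarder is normally supplied by the Splunk Cloud universal forwarder credentials app (e.g. 100_splunkcloud) rather than hand-written. And yes, the same port 9997 can be used for both listening (inputs) and forwarding out (outputs), since those are separate TCP connections.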
Hello, I would like to know the enhancements and features of Splunk 8.1.1 versus Splunk 8.0.8. May I know their respective features and bug fixes? I would also like to know their advantages over one another. Thank you
The Threat and UTM dashboards are not displaying any data. The UTM data models and their constraints are specified, however the dashboards show nothing. Has anyone experienced this?
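A hedged first check: compare the data model against raw events (the model name below is a placeholder):

  | tstats count from datamodel=<utm_datamodel> by sourcetype

If this returns nothing while the raw index search returns events, the constraint, or the tags it relies on, is likely not matching.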
Hi, I wish to get the difference between yesterday's and today's Pass % and Fail % for different sourcetypes. I have tried the following:

index=x1 sourcetype=y1 OR sourcetype=y2 OR sourcetype=y3 OR sourcetype=y4 OR sourcetype=y5 OR sourcetype=y6 OR sourcetype=y7 earliest=-48h@h latest=-24h@h
| chart count over sourcetype by Status
| addtotals
| eval "dbyest Pass %"=round((Pass/Total)*100,3)
| eval "dbyest Fail %"=round((Fail/Total)*100,3)
| stats count by sourcetype "dbyest Pass %" "dbyest Fail %"
| rename Total as "Total Scope"
| eval repo="dbyest"
| append [search index=x1 sourcetype=y1 OR sourcetype=y2 OR sourcetype=y3 OR sourcetype=y4 OR sourcetype=y5 OR sourcetype=y6 OR sourcetype=y7 earliest=-48h@h latest=-24h@h
    | chart count over sourcetype by Status
    | addtotals
    | eval "yest Pass %"=round((Pass/Total)*100,3)
    | eval "yest Fail %"=round((Fail/Total)*100,3)
    | stats count by sourcetype "yest Pass %" "yest Fail %"
    | rename Total as "Total Scope"
    | eval repo="yest"]
| eval "Pass % diff"=round("dbyest Pass %"-"yest Pass %")
| table "dbyest Pass %" "dbyest Fail %" "yest Pass %" "yest Fail %" "Pass % diff"

When I run this, I get the error "Error in 'eval' command: Type checking failed. '-' only takes numbers." Please help me get the difference.
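On the error itself: in eval, double quotes create string literals, so "dbyest Pass %"-"yest Pass %" is string minus string; field names containing spaces must be wrapped in single quotes. A sketch of the fix:

  | eval "Pass % diff" = round('dbyest Pass %' - 'yest Pass %', 3)

Note also that after the append, the dbyest and yest values live on separate rows, so something like | stats values(*) as * by sourcetype may be needed before the eval so both fields exist on the same row. And both the outer search and the subsearch use earliest=-48h@h latest=-24h@h, so as written they cover the same day; the second range presumably wants -24h@h to now.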
Hi guys, we use 3 search heads (clustered, Linux boxes) with 2 deployment boxes (1 PROD, 1 QA, Win 2012R2, 32 GB RAM each) as search peers. All the other servers listed under distsearch.conf on the SH are Linux boxes. We constantly get these messages on our search head:

"Unable to distribute to peer named XXXXXXXX at uri=XXXXXXXXXX:8089 using the uri-scheme=https because peer has status=Down. Verify uri-scheme, connectivity to the search peer, that the search peer is up, and that an adequate level of system resources are available. See the Troubleshooting Manual for more information."

and

"Problem replicating config (bundle) to search peer 'XXXXXXX', Upload bundle="/SPLUNK/splunk/var/run/54C7554E-300C-462E-A82D-6AE880CB89BF-1624948028.bundle" to peer name=XXXXXXX uri=https://XXXXXXX:8089 failed; http_status=400 http_description="Failed to untar the bundle="D:\Splunk\var\run\searchpeers\54C7554E-300C-462E-A82D-6AE880CB89BF-1624948028.bundle". This could be due Search Head attempting to upload the same bundle again after a timeout. Check for sendRcvTimeout message in splund.log, consider increasing it."."

This happens only with the 2 Windows deployment boxes; the Linux boxes never throw such alerts. My question: are both issues interrelated? The state of these 2 servers often flips from UP to DOWN in the search peer UI on the search head. Troubleshooting we tried that did not work:

1. Removing the peers and adding them again, from both the GUI and distsearch.conf, and re-authenticating them.
2. In distsearch.conf on the SH: [replicationSettings] sendRcvTimeout = 240
3. The SH bundle is about 125 MB, which is not huge.

Not sure what needs to be done here. Any help would be appreciated; hoping for a quick fix on this. Thanks for your help.
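One hedged, minimal adjustment matching the hint in the error text itself, in distsearch.conf on the search head (the value is illustrative):

  [replicationSettings]
  sendRcvTimeout = 600

If the Windows peers still fail to untar, it may also be worth checking free disk space and antivirus exclusions on D:\Splunk\var\run\searchpeers, since slow extraction there can trigger exactly this retry-after-timeout pattern.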
I have a field called file_content on a sourcetype, and it contains a CSV. Meaning: every event has a field (file_content) with a whole CSV inside it, and every event is an email. It can't be a normal field extraction because the file_content is really hard to find inside the data. I used a regex query to extract the data, and I have two issues with it:

1. The TicketNumber field can hold either a single value (,number,) or a quoted list (," number , number2 ",), but my regex ignores the quotes, so number2 spills into the next column.
2. It's very slow, and I get one CSV per hour every day, so I wonder if there is any automation or a better way to do this.

First I get all lines:

| makemv delim=" " file_content | mvexpand file_content | table file_content _time

Then I run the regex per line:

| rex field=file_content "(?P<ContactId>[^\s,]+),(?P<Customernumber>[^\s,]+),(?P<AfterContactWorkDuration>[^\s,]+),(?P<AfterContactWorkEndTimestamp>[^\s,]+),(?P<AfterContactWorkStartTimestamp>[^\s,]+),(?P<AgentInteractionDuration>[^\s,]+),(?P<ConnectedToAgentTimestamp>[^\s,]+),(?P<CustomerHoldDuration>[^\s,]+),(?P<Hierarchygroups_Level1_GroupName>[^\s,]+),(?P<Hierarchygroups_Level2_GroupName>[^\s,]+),(?P<Hierarchygroups_Level3_GroupName>[^\s,]+),(?P<LongestHoldDuration>[^\s,]+),(?P<NumberOfHolds>[^\s,]+),(?P<Routingprofile>[^\s,]+),(?P<Agent>[^\s,]+),(?P<AgentConnectionAttempts>[^\s,]+),(?P<ConnectedToSystemTimestamp>[^\s,]+),(?P<DisconnectTimestamp>[^\s,]+),(?P<InitiationMethod>[^\s,]+),(?P<InitiationTimestamp>[^\s,]+),(?P<LastUpdateTimestamp>[^\s,]+),(?P<NextContactId>[^\s,]+),(?P<PreviousContactId>[^\s,]+),(?P<DequeueTimestamp>[^\s,]+),(?P<Duration>[^\s,]+),(?P<EnqueueTimestamp>[^\s,]+),(?P<Name>[^\s,]+),(?P<TransferCompletedTimestamp>[^\s,]+),(?P<HandleTime>[^\s,]+),(?P<TicketNumber>[^\s,]+),(?P<Account>[^\s,]+),(?P<AccountName>[^\s,]+),(?P<Country>[^\s,]+),(?P<Language>[^\s,]+),(?P<Site>[^\s,]+),(?P<WrapCode>[^\s,]+)"

And here is an example of how the data should look (in CSV):

ContactId,Customernumber,AfterContactWorkDuration,AfterContactWorkEndTimestamp,AfterContactWorkStartTimestamp,AgentInteractionDuration,ConnectedToAgentTimestamp,CustomerHoldDuration,Hierarchygroups_Level1_GroupName,Hierarchygroups_Level2_GroupName,Hierarchygroups_Level3_GroupName,LongestHoldDuration,NumberOfHolds,Routingprofile,Agent,AgentConnectionAttempts,ConnectedToSystemTimestamp,DisconnectTimestamp,InitiationMethod,InitiationTimestamp,LastUpdateTimestamp,NextContactId,PreviousContactId,DequeueTimestamp,Duration,EnqueueTimestamp,Name,TransferCompletedTimestamp,HandleTime,TicketNumber,Account,AccountName,Country,Language,Site,WrapCode
aaaa-xxxxxx,123456789,90,29/06/2021 01:00,29/06/2021 01:00,111,29/06/2021 01:00,0,country1,xx,yy,90,90,language,dummy,1,29/06/2021 01:00,29/06/2021 01:00,type_x,29/06/2021 01:00,29/06/2021 01:00,,,29/06/2021 01:00,11,29/06/2021 01:00,type_y,29/06/2021 01:00,201,A123,xxx,xxx,country_y,language,type_w,xxxx
bbbb-xxxxxx,987654321,90,29/06/2021 01:00,29/06/2021 01:00,111+P4,29/06/2021 01:00,0,country1,xx,yy,90,90,language,dummy,1,29/06/2021 01:00,29/06/2021 01:00,type_x,29/06/2021 01:00,29/06/2021 01:00,,,29/06/2021 01:00,11,29/06/2021 01:00,type_y,29/06/2021 01:00,201,"""A123,B123""",xxx,xxx,country_y,language,type_w,xxxx
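On issue 1, one hedged adjustment: each [^\s,]+ capture can be replaced with an alternation that tries a quoted token first, so commas inside quotes stay in one field. An excerpt showing the shape, for three of the fields only:

  | rex field=file_content "(?P<HandleTime>\"[^\"]*\"|[^,]*),(?P<TicketNumber>\"[^\"]*\"|[^,]*),(?P<Account>\"[^\"]*\"|[^,]*)"

On issue 2, if the CSV could be routed to its own input instead of living inside the email event, INDEXED_EXTRACTIONS = csv in props.conf would handle the quoting natively, but that depends on whether the attachment can be extracted upstream.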
Hi team, we noticed that every time an indexer is restarted, the search head and the indexer itself pop up with the message: "Search peer xxxxxINDEXERxxxx has the following message: The metric value=15884.664 provided for source=/splunk/splunk/var/log/introspection/resource_usage.log, sourcetype=splunk_intro_resource_usage, host=xxxHEAVY_FORWARDERxxxx, index=_metrics is not a floating point value. Using a "numeric" type rather than a "string" type is recommended to avoid indexing inefficiencies. Ensure the metric value is provided as a floating point number and not as a string. For instance, provide 123.001 rather than "123.001"." We have simply been ignoring this until now. Can someone help? I have no idea where to begin. All our search heads (3, in a cluster) and all 5 indexers (Prod and QA, non-clustered) are Linux boxes. Hoping for a quick reply. Thanks.
Hello everyone, please can anyone help me? Since last Friday, 6/25 (or maybe earlier), some of our team members suddenly cannot access the Splunk search page. Team members affected: kandai.a.kitamura@rakuten.com, takahiro.ito@rakuten.com. Splunk search page: https://splunk.intra.rakuten-it.com/en-US/app/prov/search. They get a 404 after SSO, but other team members can still access the search page, for example: ling.cao@rakuten.com
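A hedged first check, since a 404 after SSO on an app URL is often a missing role or app permission rather than an SSO failure: compare the roles of an affected and an unaffected user (run with admin rights; the endpoint exists, though output columns may vary):

  | rest /services/authentication/users splunk_server=local
  | search title="kandai.a.kitamura@rakuten.com" OR title="ling.cao@rakuten.com"
  | table title roles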
Hi everyone. I need an help to insert in a Modal View one splunk visualization contained in Event Timeline App. I wrote this code with splunk dev help ticketModalTest.js require([     'underscore',     'backbone',     '../app/Optimus/components/ModalViewTest',     'splunkjs/mvc',     'splunkjs/mvc/searchmanager',     'splunkjs/mvc/simplexml/ready!' ], function(_, Backbone, ModalView, mvc, SearchManager) {          var ticket_modal_1 = mvc.Components.get('ticket_modal_1');     var tokens = mvc.Components.getInstance("submitted");     var detailSearch7 = new SearchManager({         id: "detailSearch7",         earliest_time: "0",         latest_time: "now",         preview: true,         cache: false,         search: "index=optimus_execution_statistics_idx | stats latest(script_name) as script_name latest(execution_status) as execution_status latest(execution_started) as execution_started latest(execution_ended) as execution_ended latest(category_name) as category_name latest(execution_host) as execution_host by _time execution_id |   where _time = $tok_snapshot_earliest$ | eval execution_started_unix = strptime(execution_started, \"%Y-%m-%d %H:%M:%S\")| sort limit=0 - execution_started| where _time >= relative_time(now(), \"-2d@d\")| stats values(execution_ended) as execution_ended values(eval(execution_id.\"|\".execution_host.\"|\".execution_started.\"|\".execution_ended)) as fullid by execution_started execution_id execution_host _time script_name execution_status| streamstats max(execution_ended) as execution_ended by execution_id | sort 0 _time| streamstats time_window=1h values(eval(execution_id.\"|\".execution_host.\"|\".execution_started.\"|\".execution_ended.\"|\".script_name)) as prev_fullid| mvexpand prev_fullid| rex field=prev_fullid \"(?<prev_execution_id>.*)\\|(?<prev_execution_host>.*)\\|(?<prev_execution_started>.*)\|(?<prev_execution_ended>.*)\\|(?<prev_script_name>.*)\"| fields - fullid prev_fullid| table execution_started execution_ended execution_id execution_host *| eval overlap = if(execution_started >= prev_execution_started AND execution_started <= prev_execution_ended, 1, 0)| eval cat = prev_execution_id| eval execution_started_unix = strptime(execution_started, \"%Y-%m-%d %H:%M:%S\")| eval execution_ended_unix = strptime(execution_ended, \"%Y-%m-%d %H:%M:%S\")| eval end=if(execution_status=\"In Progress\",now(),execution_ended_unix)| eval exclude=if(execution_started_unix = (relative_time(now(),\"@d\")-1) AND execution_ended_unix=(execution_started_unix-1) AND execution_status=\"Closed\",1,0)| eval start = execution_started_unix| eval color = case(execution_status=\"Closed\" and exclude=\"0\",\"#4FA484\", execution_status=\"Error\",\"#AF575A\", execution_status=\"Restarted\",\"#EC9960\", execution_status=\"In Progress\",\"#006d9c\")| eval group = if(len(substr(prev_execution_host, 8, len(prev_execution_host))) = 1, \"FCACLIP0\" + substr(prev_execution_host, 8, len(prev_execution_host)), \"FCACLIP\" + substr(prev_execution_host, 8, len(prev_execution_host)))| strcat cat \"|\" prev_script_name cat| eval label=cat | sort group| eval tooltip=cat | eval label=null|dedup tooltip | table label start end group color tooltip"     },{tokens: true, tokenNamespace: "submitted"});     ticket_modal_1.on("click", function(e) {         e.preventDefault();         if(e.data['click.name'] === '_time') {             var tok_snapshot_earliest= e.data['click.value'];             tokens.set('tok_snapshot_earliest',tok_snapshot_earliest);             var _title = tok_snapshot_earliest 
+"Panels"             var modal = new ModalView({ title: _title, search7: detailSearch7 });             modal.show();         }     }); }); ModalViewTest.js  define([     'underscore',     'backbone',     'jquery',     'splunkjs/mvc',     'splunkjs/mvc/searchmanager',     'splunkjs/mvc/simplexml/element/table',     'splunkjs/mvc/visualizationregistry'     ], function(_, Backbone, $, mvc, SearchManager, TableElement, VisualizationRegistry) {         var modalTemplate = "<div id=\"pivotModal2\" class=\"modal\">" +                        "<div class=\"modal-header\"><h3><%- title %></h3><button class=\"close\">Close</button></div>" +                        "<div class=\"modal-body\"></div>" +                        "<div class=\"modal-footer\"></div>" +                        "</div>" +                        "<div class=\"modal-backdrop\"></div>";         var ModalView = Backbone.View.extend({                  defaults: {                title: 'Not set'             },                          initialize: function(options) {                 this.options = options;                 this.options = _.extend({}, this.defaults, this.options);                 this.childViews = [];                 console.log('Hello from the modal window: ', this.options.title);                 this.template = _.template(modalTemplate);             },                          events: {                'click .close': 'close',                'click .modal-backdrop': 'close'             },                  render: function() {                 var data = { title : this.options.title };                 this.$el.html(this.template(data));                 return this;             },                  show: function() {                  $(document.body).append(this.render().el);                   $(this.el).find('.modal-body').append('<div id="modalVizualizationTest"></div>');                                 $(this.el).find('.modal').css({                         width:'90%',                         height:'80%',                         position: 'fixed',                         right: '0',                         left: '0',                        'margin-right': 'auto',                         'margin-left': 'auto',                         height: '80vh',                         'overflow-y': 'auto',                 });                 var customViz = VisualizationRegistry.getVisualizer('event-timeline-viz','event-timeline-viz');                 var search7 = mvc.Components.get(this.options.search7.id);                 var myViz = new customViz({                     id: "myViz",                     managerid: search7.name,                     drilldown: "none",                     el: $('modalVizualizationTest'),                     showPager: false,                     "link.openSearch.visible": "false",                     "link.exportResults.visible": "false",                     "link.inspectSearch.visible": "false",                     "refresh.link.visible": "false",                     "showLastUpdated": false                 }).render();                 this.childViews.push(myViz);                 search7.startSearch(); },                  close: function() {                this.unbind();                this.remove();                _.each(this.childViews, function(childView) {                                        childView.unbind();                    childView.remove();                                    });             }              });                  return ModalView; }); Any suggestion about why this code 
doesn't work? Thanks
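One guess at the failure, offered as a hypothesis rather than a confirmed diagnosis: $('modalVizualizationTest') is a tag-name selector, not an id selector, so the visualization is rendered into a non-existent element. A one-line sketch of the fix inside show():

  el: $('#modalVizualizationTest'),

Separately, the object passed to .css() defines height twice ('80%' and '80vh'); only the last one takes effect.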
Hello, after upgrading Splunk_TA_snow from 6.0.0 to 6.4.1 we have an issue with data collection. When we try to re-enter the password for the snow account in the add-on, we keep getting: "Unable to reach server at https://xyz.service-now.com. Check configurations and network settings." We upgraded the add-on to the newest version (7.0.0) and tried some of the solutions from https://docs.splunk.com/Documentation/AddOns/released/ServiceNow/Troubleshooting#Troubleshoot_the_Splunk_Add-on_for_ServiceNow, but still cannot update the password for the snow account. Also, we have noticed the following socket timeout errors:

2021-06-29 16:47:03,365 ERROR pid=67397 tid=Thread-2 file=snow_data_loader.py:_do_collect:298 | Failure occurred while connecting to https://xyz.service-now.com/api/now/table/cmdb_ci?sysparm_display_value=all&sysparm_offset=0&sysparm_limit=1000&sysparm_exclude_reference_link=true&sysparm_query=sys_updated_on>=2020-12-15+21:08:41^sys_updated_on<2021-06-29+14:46:01^ORDERBYsys_updated_on,sys_id. The reason for failure=Traceback (most recent call last):
File "/opt/splunk/etc/apps/Splunk_TA_snow/bin/snow_data_loader.py", line 254, in _do_collect
"Authorization": "Basic %s" % credentials
File "/opt/splunk/lib/python3.7/site-packages/httplib2/__init__.py", line 1959, in request
File "/opt/splunk/lib/python3.7/site-packages/httplib2/__init__.py", line 1622, in _request
File "/opt/splunk/lib/python3.7/site-packages/httplib2/__init__.py", line 1528, in _conn_request
File "/opt/splunk/lib/python3.7/site-packages/httplib2/__init__.py", line 1309, in connect
socket.timeout: timed out

Do you have any idea how to solve this?
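Since the traceback shows a raw socket timeout rather than an authentication error, a hedged next step is to test reachability from the Splunk server itself, outside the add-on (credentials below are placeholders):

  curl -v --max-time 30 -u '<user>:<password>' 'https://xyz.service-now.com/api/now/table/cmdb_ci?sysparm_limit=1'

If this also hangs, the problem is network-level (a firewall, or a proxy the add-on is not configured to use) rather than the add-on upgrade.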
Hi, I created a lookup table file via the GUI; in the backend it is saved under /opt/splunk/etc/apps/search/lookups. Then I created a lookup definition for the same lookup (which uses the lookup table file), but now I am unable to find an entry for this lookup definition in the backend. Can you please let me know in which location I should look for the lookup definition entry?
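For reference, a hedged pointer: lookup definitions created in the GUI are written to transforms.conf in the local directory of the app (or user) context they were created in, so in this case likely /opt/splunk/etc/apps/search/local/transforms.conf, or /opt/splunk/etc/users/<user>/search/local/transforms.conf if the sharing is private. The stanza name below is hypothetical:

  [my_lookup_definition]
  filename = my_lookup_file.csv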
I am trying to map Splunk Enterprise alert logs to Splunk Security Essentials for MITRE ATT&CK, but the MITRE tactics and techniques don't map. Is there any solution for that? I am not using Splunk Enterprise Security for MITRE ATT&CK, so if there is any solution on plain Splunk Enterprise, please let me know. Thanks, Vatsal Shah
Hi all, good day! This is a Splunk Phantom architecture question. We are in the initial stage of building Splunk Phantom and are considering the C1E+ network topology (snip attached), since we have an external Splunk instance (both indexer and search head), but that Splunk is on cloud (a SaaS product). 1. Would it be supported to build Splunk Phantom without the embedded Splunk instance (which would normally be part of the Phantom architecture), given that our external Splunk instance is on cloud? 2. What is the major functionality of the embedded Splunk in the Phantom architecture? 3. Are there any barriers or issues to building the Phantom infrastructure without embedded Splunk? 4. Which Phantom versions support building without embedded Splunk? Looking forward to a response; it will help us a lot to have a consistent environment. Regards, Yeswanth M.