I have SOAR installed and am trying to figure out how to make configuration changes, specifically for accessing the web interface. We currently access it via https://ipaddress:8000. Overall, I am trying to find out whether it can be made accessible via HTTP, if possible. Along with that, I would like to know where to make general configuration changes, similar to web.conf for Splunk. I had to dig around quite a bit just to discover login.html in a templates folder so I could add my server names for clarity. Any help would be greatly appreciated!
As AI starts tackling low-level alerts, it's more critical than ever to uplevel your threat hunting to find sneaky and elusive threats. This tech talk shares how the Splunk Threat Hunting team seamlessly integrated the PEAK Threat Hunting Framework into their workflow while leveraging Splunk. See how they also joined forces with the SOC to turn non-hunters into cyber sleuths. Don't have a dedicated hunt team? (Or even if you do,) explore Splunk's end-to-end processes, with tips and tricks to unleash a pipeline of hunters and turn the PEAK Threat Hunting Framework from a concept into a powerful tool in your organization. Watch Splunk threat hunters Sydney Marrone and Robin Burkett to learn about:

- The PEAK threat hunting framework
- How you can customize PEAK for your environment
- How to enable your SOC analysts to be successful threat hunters
- Real-world examples of PEAK hunt types for actionable insights

Watch the full Tech Talk here:
I'm trying to get at the results of a phantom.act() action run, more specifically the Splunk HTTP app "get file" action. Something as simple as:

# inside a custom code block
def get_action_results(**kwargs):
    phantom.debug(action)
    phantom.debug(success)
    phantom.debug(results)
    phantom.debug(handle)
    return

phantom.act('get file',
            parameters=[{'hostname': '', 'file_path': '/path/to/file'}],
            assets=["web_server"],
            callback=get_action_results)

The action will run as expected; however, the callback isn't getting the results output. Am I misunderstanding callbacks in this scenario?
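For comparison, a minimal sketch of the callback shape that auto-generated SOAR playbook blocks use: the platform invokes the callback with the results as keyword arguments, so they need to be declared parameters (or read out of kwargs) rather than referenced as bare names. The exact parameter list below is an assumption based on typical generated playbook code for recent SOAR versions.

# Sketch: declare the keyword arguments SOAR passes to an action callback
def get_action_results(action=None, success=None, container=None, results=None,
                       handle=None, filtered_artifacts=None, filtered_results=None,
                       custom_function=None, **kwargs):
    phantom.debug(action)
    phantom.debug(success)
    phantom.debug(results)   # list of result dicts from the 'get file' run
    phantom.debug(handle)
    return

With the **kwargs-only signature in the post, names like results are undefined inside the function body, which would explain the missing output.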
I want the email body of my send email action to be a table, like my search results. How do I pass dynamic token field values?

$result.name$ $result.index$ $result.sourcetype$

How do I make the field values appear side by side instead of one below the other? This is what I currently get in my email body:

name name2 name3 name4
index index2 index3 index4
sourcetype sourcetype2 sourcetype3 sourcetype4

I want it to be like this:

name index sourcetype
name2 index2 sourcetype2
name3 index3 sourcetype3
name4 index4 sourcetype4

Is it possible to do this?
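One hedged option, assuming the email can be sent from the search itself rather than from alert tokens (recipient address and search are placeholders): the sendemail command with inline=true and format=table renders the result set as a table in the message body, one row per result.

index=your_index sourcetype=your_sourcetype
| table name index sourcetype
| sendemail to="you@example.com" subject="Search results" format=table inline=true

If the email must come from the alert action itself, the email alert action's option to include results inline in table format gives a similar layout; $result.field$ tokens only ever expand from the first result row.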
Hello, I am testing SEDCMD on a single-server Splunk architecture. Below is the current configuration, which is placed in /opt/splunk/etc/system/local/. I am uploading a CSV file which contains (fake) individual data, including two formats of SSN (xxx-xx-xxxx and xxxxxxxxx). The masking is not working when I upload the CSV file. Can someone help point me in the right direction?

props.conf

### CUSTOM ###
[csv]
SEDCMD-redact_ssn = s/\b\d{3}-\d{2}-\d{4}\b/XXXXXXXXX/g

Included below is FAKE individual data pulled from the CSV file for testing:

514302782,f,1986/05/27,Nicholson,Russell,Jacki,3097 Better Street,Kansas City,MO,66215,913-227-6106,jrussell@domain.com,a,345389698201044,232,2010/01/01
505-88-5714,f,1963/09/23,Mcclain,Venson,Lillian,539 Kyle Street,Wood River,NE,68883,308-583-8759,lvenson@domain.com,d,30204861594838,471,2011/12/01
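A sketch of one thing to check: the posted pattern only matches the dashed format, so the bare 9-digit form passes through untouched; a second SEDCMD could cover it, though \b\d{9}\b will also catch any other standalone 9-digit number in the file. Also note that SEDCMD is applied at parse time, so already-indexed data stays unmasked, and structured inputs parsed with INDEXED_EXTRACTIONS on a forwarder may bypass index-time SEDCMD entirely.

[csv]
SEDCMD-redact_ssn_dashed = s/\b\d{3}-\d{2}-\d{4}\b/XXXXXXXXX/g
SEDCMD-redact_ssn_plain = s/\b\d{9}\b/XXXXXXXXX/g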
Thanks in advance. I need to show a status: if P_RETURN_STATUS is SUCCESS, then SUCCESS; if ERROR, then ERROR; but if P_RETURN_STATUS is ERROR and P_MESSAGE is NO NEW BATCH EXISTS, then SUCCESS. The problem is that P_RETURN_STATUS already holds the value ERROR. How do I override it when using an AND condition?

| eval P_RETURN_STATUS=case(like('P_RETURN_STATUS',"%SUCCESS%"),"SUCCESS", like('P_RETURN_STATUS',"%ERROR%"),"ERROR", like('P_MESSAGE',"%NO NEW BATCH EXISTS%") AND like('P_RETURN_STATUS',"%ERROR%"),"SUCCESS")
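For reference, case() returns the result of the first condition that evaluates to true, so the compound condition has to come before the plain ERROR test. A sketch using the field names from the post:

| eval P_RETURN_STATUS=case(
    like('P_MESSAGE',"%NO NEW BATCH EXISTS%") AND like('P_RETURN_STATUS',"%ERROR%"), "SUCCESS",
    like('P_RETURN_STATUS',"%SUCCESS%"), "SUCCESS",
    like('P_RETURN_STATUS',"%ERROR%"), "ERROR")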
Hello everyone, I have a problem with an automatic lookup and user roles. Let me explain: with the admin role, everything is fine; the automatic lookup works and there are no warnings on my dashboard. When I use another role, it doesn't work. I've checked all the permissions for the lookups and knowledge objects, assigning them to the role in question. But nothing works; I'm really out of ideas. Thank you very much for your help.
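A quick way to narrow this down (the lookup file name is a placeholder): log in as the affected role and query the lookup directly. If this fails too, the problem is permissions on the lookup table file or the lookup definition (both need to be shared with the role, in an app the role can access) rather than the automatic lookup itself.

| inputlookup my_lookup.csv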
Hi, I am trying to mask some passwords but I cannot figure out the proper props.conf (ha!) for it. It works on the fly, but not when I try to set it in props.conf. This is my mask on the fly, which basically just replaces the password with some characters:

rex mode=sed field=ms_Mcs_AdmPwd "s/ms_Mcs_AdmPwd=(\w+)/###\2/g"

And this is the raw data from sourcetype ActiveDirectory:

Additional Details:
                                  msLAPS-PasswordExpirationTime=133579223312233231
                                  ms-Mcs-AdmPwd=RlT34@iw4dasdasd

How would I do this in props.conf or transforms.conf?

Oliver
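A minimal index-time sketch, with two caveats: SEDCMD only masks data as it is being indexed (existing events are untouched), and it belongs on the first full Splunk instance that parses the data (indexer or heavy forwarder), not on a universal forwarder. Note the attribute in the raw event is ms-Mcs-AdmPwd with hyphens, and this pattern simply overwrites everything after the equals sign; the stanza name assumes the sourcetype really is ActiveDirectory.

props.conf:

[ActiveDirectory]
SEDCMD-mask_admpwd = s/ms-Mcs-AdmPwd=\S+/ms-Mcs-AdmPwd=######/g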
The Splunk readiness app cannot determine whether the Mission Control app is Python compatible.
Currently, I have a search that returns the following:

Search: index=index1 sourcetype=sourcetype1 | table host, software{}

host        software{}
hostname    cpe:/a:vendor:product:version
            cpe:/a:vendor:product:version
            cpe:/a:vendor:product:version
            cpe:/a:vendor:product:version
            cpe:/a:vendor:product:version
hostname    cpe:/a:vendor:product:version
            ...
            ...

Here, there are multiple software entries tied to one hostname, and they all sit under a single field called software{}. What I am looking for is a way to split the software field into 3 fields by extracting the vendor, the product, and the version, to return:

host        software_vendor    software_product    software_version
hostname    vendor             product             version
            vendor             product             version
            vendor             product             version
            vendor             product             version
            vendor             product             version
hostname    vendor             product             version
            ...
            ...

Does anyone have any ideas?
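A sketch of one way to do this with mvexpand and rex (the field names follow the cpe:/a:vendor:product:version layout shown above; if some CPE strings have more or fewer segments, the regex would need adjusting):

index=index1 sourcetype=sourcetype1
| rename "software{}" AS software
| mvexpand software
| rex field=software "^cpe:/a:(?<software_vendor>[^:]+):(?<software_product>[^:]+):(?<software_version>[^:]+)"
| table host software_vendor software_product software_version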
Is there a way to use Splunk to find out if Wireshark is installed on any of the systems? Is there a query for this?
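This depends entirely on what data is already collected (software inventory, process, or installation events); there is no built-in query. A sketch assuming some endpoint data that mentions the product is indexed (the index name is a placeholder):

index=endpoint ("Wireshark" OR "wireshark.exe")
| stats latest(_time) AS last_seen BY host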
We are currently indexing big log files (~1 GB in size) in our Splunk indexer using the Splunk Universal Forwarder. All the log data is stored in a single index. We want to make sure the log data is deleted one week after the date it was indexed. Is there a way to achieve this?
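A sketch using index retention (the stanza name is a placeholder): frozenTimePeriodInSecs = 604800 is 7 days. One caveat: Splunk freezes (by default, deletes) whole buckets once the newest event in the bucket is older than this period, and it goes by event time rather than index time, so data can linger somewhat past the configured week.

indexes.conf:

[my_logs_index]
frozenTimePeriodInSecs = 604800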
Hi, I want to embed a dashboard in my own webpage. First I found the "EDFS" app, but after installing it and following the steps, I didn't see the "EDFS" option in the input. When I include this in the HTML file, it responds with "refused to connect":

<iframe src="https://127.0.0.1:9999" seamless frameborder="no" scrolling="no" width="1200" height="2500"></iframe>

Also, if I add "trustedIP=127.0.0.1" in the server.conf file, then when I open Splunk Web at "127.0.0.1:8000" it shows an "Unauthorized" error. Additionally, I found that adding "x_frame_options_sameorigin = 0" and "enable_insecure_login = true" in the web.conf file, and including this in the HTML file:

<iframe src="http://splunkhost/account/insecurelogin?username=viewonlyuser&password=viewonly&return_to=/app/search/dashboardname" seamless frameborder="no" scrolling="no" width="1200" height="2500"></iframe>

shows the Splunk Web login page with the error message "No cookie support detected. Check your browser configuration." If I try to log in with the username and password, it still doesn't work and shows a "Server error" message. If I open the HTML file in a Firefox private window, it skips the login page and displays "Unauthorized." Is there a way to solve these issues, or an alternative method to display the dashboard on an external webpage? Thanks in advance.
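For reference, a consolidated sketch of the web.conf settings the insecure-login approach depends on; both weaken security, and the iframe src must point at the Splunk Web port (8000 by default), not some other port.

web.conf:

[settings]
x_frame_options_sameorigin = 0
enable_insecure_login = true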
We recently migrated to SmartStore, and post-migration the search factor (SF) and replication factor (RF) are not met. Can anyone help me with troubleshooting steps?
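A first step, assuming a standard indexer cluster: run this on the cluster manager to see which peers and buckets are keeping the factors from being met, and whether fixup tasks are stuck.

splunk show cluster-status --verbose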
Hi Splunkers, I have a question about a possible issue with UF management via a deployment server. In a customer environment, some UFs were installed on Windows servers; they send data to a dedicated HF. Now we want to manage them with a deployment server. The point is this: those UFs were installed with the graphical wizard, and during that installation the data to collect and send to the HF was selected, so inputs.conf was written during that phase, via the GUI. Now, in some Splunk course material (I don't remember which one; it should be the Splunk Enterprise Admin course), I came across this warning: if inputs.conf on a Windows UF is set with the graphical wizard, as in our case, the deployment server can have problems interacting with those UFs, and may even be unable to manage them. Is this confirmed? Do you know in which section of the documentation I can find evidence of this?
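The usual explanation is configuration precedence rather than anything Windows-specific: inputs written by the installer land in system/local, which outranks any app a deployment server delivers, so the deployment server cannot override those settings until they are moved into an app. The paths below are illustrative:

# written by the Windows installer wizard (highest precedence):
%SPLUNK_HOME%\etc\system\local\inputs.conf
# delivered by the deployment server (lower precedence):
%SPLUNK_HOME%\etc\apps\<deployment_app>\local\inputs.conf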
Hello, first of all, sorry for my bad English; I hope you can understand everything. My goal is to get the journald logs from the universal forwarder into Splunk in JSON format (Splunk/UF version 9.1.2). I use the journald_input app.

inputs.conf (UF):

[journald://sshd]
index = test
sourcetype = test
journalctl-filter = _SYSTEMD_UNIT=sshd.service

I've tried different props.conf settings, for example something like this:

props.conf (UF):

[test]
INDEXED_EXTRACTIONS = json
KV_MODE = json
SHOULD_LINEMERGE = false
#INDEXED_EXTRACTIONS = json
#NO_BINARY_CHECK = true
#AUTO_KV_JSON = true

On the UF I check whether the query is enabled with the command

ps aux | grep journalctl

which displays this command:

journalctl -f -o json --after-cursor s=a12345ab1abc12ab12345a01f1e920538;i=43a2c;b=c7efb124c33f43b0b0142ca0901ca8de;m=11aa0e450a21;t=233ae3422cd31;x=00af2c733a2cdfe7 _SYSTEMD_UNIT=sshd.service -q --output-fields PRIORITY,_SYSTEMD_UNIT,_SYSTEMD_CGROUP,_TRANSPORT,_PID,_UID,_MACHINE_ID,_GID,_COMM,_EXE,MESSAGE

I can try it out by running this command in the CLI, but I have to take out the "--after-cursor ...." part, so I run the following command on the CLI to keep track of the journald logs:

journalctl -f -o json _SYSTEMD_UNIT=sshd.service -q --output-fields PRIORITY,_SYSTEMD_UNIT,_SYSTEMD_CGROUP,_TRANSPORT,_PID,_UID,_MACHINE_ID,_GID,_COMM,_EXE,MESSAGE

On the universal forwarder, the tracked journald logs then look like this (a nice JSON format):

{ "__CURSOR" : "s=a12345ab1abc12ab12345a01f1e920538;i=43a2c;b=a1aaa111a11aaa111aa000a0101;m=11aa00c5b9a0;t=233ae39a37aa2;x=00af2c733a2cdfe7", "__REALTIME_TIMESTAMP" : "1710831664593570", "__MONOTONIC_TIMESTAMP" : "27194940570016", "_BOOT_ID" : "a1aaa111a11aaa111aa000a0101", "_TRANSPORT" : "syslog", "PRIORITY" : "6", "_UID" : "0", "_MACHINE_ID" : "1111", "_GID" : "0", "_COMM" : "sshd", "_EXE" : "/usr/sbin/sshd", "_SYSTEMD_CGROUP" : "/system.slice/sshd.service", "_SYSTEMD_UNIT" : "sshd.service", "MESSAGE" : "Invalid user asdf from 111.11.111.111 port 111", "_PID" : "1430615" }
{ "__CURSOR" : "s=a12345ab1abc12ab12345a01f1e920538;i=43a2d;b=a1aaa111a11aaa111aa000a0101;m=11aa00ec25bf;t=233ae39c9e6c0;x=10ac2c735c2cdfe7", "__REALTIME_TIMESTAMP" : "1710831667111616", "__MONOTONIC_TIMESTAMP" : "27194943088063", "_BOOT_ID" : "a1aaa111a11aaa111aa000a0101", "_TRANSPORT" : "syslog", "_UID" : "0", "_MACHINE_ID" : "1111", "PRIORITY" : "5", "_GID" : "0", "_COMM" : "sshd", "_EXE" : "/usr/sbin/sshd", "_SYSTEMD_CGROUP" : "/system.slice/sshd.service", "_SYSTEMD_UNIT" : "sshd.service", "MESSAGE" : "pam_unix(sshd:auth): check pass; user unknown", "_PID" : "1430615" }
{ "__CURSOR" : "s=a12345ab1abc12ab12345a01f1e920538;i=43a2e;b=a1aaa111a11aaa111aa000a0101;m=11aa00ec278a;t=233ae39c9e88c;x=5fb4c21ae6130519", "__REALTIME_TIMESTAMP" : "1710831667112076", "__MONOTONIC_TIMESTAMP" : "27194943088522", "_BOOT_ID" : "a1aaa111a11aaa111aa000a0101", "_TRANSPORT" : "syslog", "_UID" : "0", "_MACHINE_ID" : "1111", "PRIORITY" : "5", "_GID" : "0", "_COMM" : "sshd", "_EXE" : "/usr/sbin/sshd", "_SYSTEMD_CGROUP" : "/system.slice/sshd.service", "_SYSTEMD_UNIT" : "sshd.service", "MESSAGE" : "pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=111.11.111.111", "_PID" : "1430615" }
{ "__CURSOR" : "s=a12345ab1abc12ab12345a01f1e920538;i=43a2f;b=a1aaa111a11aaa111aa000a0101;m=11aa0108f5bf;t=233ae39e6b6c0;x=d072e90acf887129", "__REALTIME_TIMESTAMP" : "1710831668999872", "__MONOTONIC_TIMESTAMP" : "27194944976319", "_BOOT_ID"
: "a1aaa111a11aaa111aa000a0101", "_TRANSPORT" : "syslog", "PRIORITY" : "6", "_UID" : "0", "_MACHINE_ID" : "1111", "_GID" : "0", "_COMM" : "sshd", "_EXE" : "/usr/sbin/sshd", "_SYSTEMD_CGROUP" : "/system.slice/sshd.service", "_SYSTEMD_UNIT" : "sshd.service", "_PID" : "1430615", "MESSAGE" : "Failed password for invalid user asdf from 111.11.111.111 port 111 ssh2" } { "__CURSOR" : "s=a12345ab1abc12ab12345a01f1e920538;i=43a30;b=a1aaa111a11aaa111aa000a0101;m=11aa010e0295;t=233ae39ebc397;x=d1eb29e00003daa7", "__REALTIME_TIMESTAMP" : "1710831669330839", "__MONOTONIC_TIMESTAMP" : "27194945307285", "_BOOT_ID" : "a1aaa111a11aaa111aa000a0101", "_TRANSPORT" : "syslog", "_UID" : "0", "_MACHINE_ID" : "1111", "PRIORITY" : "5", "_GID" : "0", "_COMM" : "sshd", "_EXE" : "/usr/sbin/sshd", "_SYSTEMD_CGROUP" : "/system.slice/sshd.service", "_SYSTEMD_UNIT" : "sshd.service", "MESSAGE" : "pam_unix(sshd:auth): check pass; user unknown", "_PID" : "1430615" } { "__CURSOR" : "s=a12345ab1abc12ab12345a01f1e920538;i=43a31;b=a1aaa111a11aaa111aa000a0101;m=11aa012f0b3c;t=233ae3a0ccc3e;x=c33e28a6111c89ea", "__REALTIME_TIMESTAMP" : "1710831671495742", "__MONOTONIC_TIMESTAMP" : "27194947472188", "_BOOT_ID" : "a1aaa111a11aaa111aa000a0101", "_TRANSPORT" : "syslog", "PRIORITY" : "6", "_UID" : "0", "_MACHINE_ID" : "1111", "_GID" : "0", "_COMM" : "sshd", "_EXE" : "/usr/sbin/sshd", "_SYSTEMD_CGROUP" : "/system.slice/sshd.service", "_SYSTEMD_UNIT" : "sshd.service", "_PID" : "1430615", "MESSAGE" : "Failed password for invalid user asdf from 111.11.111.111 port 111 ssh2" } { "__CURSOR" : "s=a12345ab1abc12ab12345a01f1e920538;i=43a32;b=a1aaa111a11aaa111aa000a0101;m=11aa0135591b;t=233ae3a131a1d;x=45420f6d2ca07377", "__REALTIME_TIMESTAMP" : "1710831671908893", "__MONOTONIC_TIMESTAMP" : "27194947885339", "_BOOT_ID" : "a1aaa111a11aaa111aa000a0101", "_TRANSPORT" : "syslog", "_UID" : "0", "_MACHINE_ID" : "1111", "_GID" : "0", "PRIORITY" : "3", "_COMM" : "sshd", "_EXE" : "/usr/sbin/sshd", "_SYSTEMD_CGROUP" : "/system.slice/sshd.service", "_SYSTEMD_UNIT" : "sshd.service", "_PID" : "1430615", "MESSAGE" : "error: Received disconnect from 111.11.111.111 port 111:11: Unable to authenticate [preauth]" } { "__CURSOR" : "s=a12345ab1abc12ab12345a01f1e920538;i=43a33;b=a1aaa111a11aaa111aa000a0101;m=11aa01355bee;t=233ae3a131cf0;x=15b1aa1201a45cdf", "__REALTIME_TIMESTAMP" : "1710831671909616", "__MONOTONIC_TIMESTAMP" : "27194947886062", "_BOOT_ID" : "a1aaa111a11aaa111aa000a0101", "_TRANSPORT" : "syslog", "PRIORITY" : "6", "_UID" : "0", "_MACHINE_ID" : "1111", "_GID" : "0", "_COMM" : "sshd", "_EXE" : "/usr/sbin/sshd", "_SYSTEMD_CGROUP" : "/system.slice/sshd.service", "_SYSTEMD_UNIT" : "sshd.service", "_PID" : "1430615", "MESSAGE" : "Disconnected from invalid user asdf 111.11.111.111 port 111 [preauth]" } { "__CURSOR" : "s=a12345ab1abc12ab12345a01f1e920538;i=43a34;b=a1aaa111a11aaa111aa000a0101;m=11aa01355c42;t=233ae3a131d45;x=123f45a09e00a8a2", "__REALTIME_TIMESTAMP" : "1710831671909701", "__MONOTONIC_TIMESTAMP" : "27194947886146", "_BOOT_ID" : "a1aaa111a11aaa111aa000a0101", "_TRANSPORT" : "syslog", "_UID" : "0", "_MACHINE_ID" : "1111", "PRIORITY" : "5", "_GID" : "0", "_COMM" : "sshd", "_EXE" : "/usr/sbin/sshd", "_SYSTEMD_CGROUP" : "/system.slice/sshd.service", "_SYSTEMD_UNIT" : "sshd.service", "MESSAGE" : "PAM 1 more authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=111.11.111.111", "_PID" : "1430615" }      (Example)    But when I look for the logs on the search head, they look like this:     Invalid user asdf from 111.11.111.111 port 
111pam_unix(sshd:auth): check pass; user unknownpam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=111.11.111.111Failed password for invalid user asdf from 111.11.111.111 port 111 ssh2pam_unix(sshd:auth): check pass; user unknownFailed password for invalid user asdf from 111.11.111.111 port 111 ssh2error: Received disconnect from 111.11.111.111 port 111:11: Unable to authenticate [preauth]Disconnected from invalid user asdf 111.11.111.111 port 111 [preauth]PAM 1 more authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=111.11.111.111

Does anyone know why the logs are written together instead of being treated as individual events, and why they are not in JSON format? Can anyone suggest a way to fix this?

Thank you very much!
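One hedged possibility to check: parse-time settings such as SHOULD_LINEMERGE and LINE_BREAKER only take effect on the first full Splunk Enterprise instance the data passes through (an indexer or heavy forwarder), so if the props.conf above exists only on the UF, the indexer may be gluing the stream together with its defaults. A sketch, assuming the sourcetype stays test: put the line-breaking settings on the indexer, and keep KV_MODE = json on the search head so each JSON object becomes its own event with extracted fields.

On the indexer / heavy forwarder:

[test]
SHOULD_LINEMERGE = false
LINE_BREAKER = ([\r\n]+)(?=\{)

On the search head:

[test]
KV_MODE = json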
In our dashboard, a user reported that she got a "Search was cancelled" message while using it. I learned that multiple searches were running concurrently, so hers could have been cancelled. But when I tried to reproduce the issue, the searches kept getting queued and then ran; they were never cancelled. In what scenarios is a search queued, versus cancelled?
Hi. How can I change the background color of a pie chart dynamically through a drop-down selection? Is it OK for it to look like the picture below?

<form theme="dark">
  <label>Test2</label>
  <fieldset submitButton="false" autoRun="true"></fieldset>
  <row>
    <panel>
      <input type="dropdown" token="color_select">
        <label>Background</label>
        <choice value="#175565">Color1</choice>
        <choice value="#475565">Color2</choice>
      </input>
      <chart>
        <search>
          <query>index=p1991_m_tiltline_index_json_raw | top vin.number</query>
          <earliest>-24h@h</earliest>
          <latest>now</latest>
        </search>
        <option name="charting.axisTitleX.visibility">visible</option>
        <option name="charting.axisTitleY.visibility">visible</option>
        <option name="charting.axisTitleY2.visibility">visible</option>
        <option name="charting.backgroundColor">$color_select$</option>
        <option name="charting.chart">pie</option>
        <option name="charting.drilldown">none</option>
        <option name="charting.fontColor">#99CCFF</option>
        <option name="charting.foregroundColor">#EBF5FF</option>
        <option name="charting.legend.placement">right</option>
        <option name="charting.seriesColors">[0xEBF0F5,0xC2D1E0,0x99B2CC,0x7094B8,0x4775A3,0x2E5C8A,0x24476B,0x1A334C,0x0F1F2E,0x050A0F]</option>
        <option name="trellis.enabled">0</option>
        <option name="trellis.size">large</option>
        <option name="height">300</option>
      </chart>
    </panel>
  </row>
</form>
We had a Splunk Enterprise installation (9.2.0.1) on Windows Server 2019 and upgraded to Windows Server 2022 today. Splunk is only set up for local event log collection; events are forwarded from other workstations. The Windows subscription and forwarded events are working, but Splunk isn't ingesting newer logs since the in-place upgrade to Server 2022. I also can't access Splunk's Event Log Collection settings since the upgrade; I'm met with a "Permission error". I have restarted the server fully and am tempted to reinstall Splunk as well. Any ideas?

Edit: Running with a free Splunk Enterprise license (<500 MB/day ingestion). The service runs under a separate domain user service account. The instance is only used to ingest local event logs that have been forwarded from other workstations. I can't see any other configuration that has changed.

inputs.conf

[default]
host = <servername>

[WinEventLog://ForwardedEvents]
disabled = false
index = applocker
renderXml = true
blacklist = 111
Hello, I have been working with Splunk for a few months now, and we use it mainly for cybersecurity monitoring. With regard to data models (CIM), should I create separate data models for different zones, or combine everything into a single data model? For example, I have 3 independent network zones: DMZ, Zone A and Zone B. Each zone has multiple indexes linked to it. Should I use the default CIM data model, e.g. datamodel=Authentication, with all the indexes from DMZ, Zone A and Zone B, or should I make copies of the data model?

Scenario 1: If I use a common data model, I will use, for example, where source=xxx to split things out in my queries and dashboards.

Scenario 2: If I use separate data models, I will have datamodel=DMZ_Authentication, datamodel=ZoneA_Authentication, and perhaps use append when I need the overall picture.

I'm still unsure which is the best approach.
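For what it's worth, a sketch of how the shared-data-model approach can still be carved up by zone at search time, assuming each zone's indexes follow a naming convention (the index names are placeholders): index is available as a filter and split-by field in tstats, so one accelerated Authentication model can serve both per-zone and overall views without cloning it.

| tstats count from datamodel=Authentication where (index=dmz_* OR index=zonea_* OR index=zoneb_*) by index, Authentication.action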