All Topics

Hello,

I would like to convert my hexadecimal code to a bit value based on this calculation.

Hex code: 0002
Separate into 2 bytes each: 00 / 02

2-byte bitmask:
Byte 0: HEX = 00 - 0000 0000
Byte 1: HEX = 02 - 0000 0010

Byte 1 Byte 0 - 0000 0010 0000 0000

Count the position of the non-zero bit from the right side:
Byte combination - 0000 0010 0000 0000
The 1 is at position 10 counting from the right, so the bit value is 9.

I need to calculate this in Splunk, where the HEX_Code is the value from the lookup.

Thanks in advance! Happy Splunking!
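A minimal SPL sketch of one way to do this, assuming the field is called HEX_Code, that exactly one bit is set in the mask, and the same Byte 1 / Byte 0 ordering as above (the makeresults and hard-coded HEX_Code are only stand-ins for your lookup):

| makeresults
| eval HEX_Code="0002"
``` swap the two bytes ("0002" -> "0200") to match the Byte 1 / Byte 0 ordering ```
| eval swapped=substr(HEX_Code, 3, 2).substr(HEX_Code, 1, 2)
| eval dec=tonumber(swapped, 16)
``` 0-based position of the set bit, counting from the right: log base 2 ```
| eval bit_value=floor(log(dec, 2))
| table HEX_Code swapped dec bit_value

For 0002 this returns bit_value=9, matching the worked example above.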
Hi,

Join is not returning the data from the subsearch; I have tried many options from other answers but nothing has worked. The target is to check how many departments are using the latest version of some software compared to all older versions together.

My search query:

index=abc version!="2.0"
| dedup version thumb_print
| stats count(thumb_print) as OLD_RUNS by department
| join department
    [search index=abc version="2.0"
    | dedup version thumb_print
    | stats count(thumb_print) as NEW_RUNS by department ]
| eval total=OLD_RUNS + NEW_RUNS
| fillnull value=0
| eval perc=((NEW_RUNS/total)*100)
| eval department=substr(department, 1, 50)
| eval perc=round(perc, 2)
| table department OLD_RUNS NEW_RUNS perc
| sort -perc

Overall, this search over a 1-week time period is expected to return more than 100k events.
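For comparison, a join-free sketch of the same report, assuming the index and field names above; a single pass with conditional counts avoids the subsearch row limits that often make join come back empty at this event volume:

index=abc
| dedup version thumb_print
| stats count(eval(version!="2.0")) as OLD_RUNS count(eval(version="2.0")) as NEW_RUNS by department
| fillnull value=0 OLD_RUNS NEW_RUNS
| eval total=OLD_RUNS + NEW_RUNS
| eval perc=round((NEW_RUNS/total)*100, 2)
| eval department=substr(department, 1, 50)
| table department OLD_RUNS NEW_RUNS perc
| sort -perc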
In our App/Add-on Python code we need access to a Python library that can encode and decode JSON Web Tokens (JWT). Currently we package cffi and PyJWT under lib, with the cffi backend required for each OS, i.e. for Linux: _cffi_backend.cpython-37m-x86_64-linux-gnu.so, and for Windows: _cffi_backend.cp37-win_amd64.pyd. This worked until recently, when we updated the add-on's splunk-sdk-python to 2.0.2 and the add-on started failing in the Splunk Cloud environment with the error: No module named '_cffi_backend'. What OS and version does Splunk Cloud run? And is there any way to invoke a Python library install command such as 'pip install pyjwt' while the add-on installs?
Hi, I am trying to render a network of my data using react-viz in the dashboard of my Splunk App . For the past few days, I have been trying various things to get the code to work, but all I see is a blank screen. I have pasted my code below. Please let me know if you can identify where I might be going wrong.   network_dashboard.js:             require([ 'jquery', 'splunkjs/mvc', 'splunkjs/mvc/simplexml/ready!' ], function($, mvc) { function loadScript(url) { return new Promise((resolve, reject) => { const script = document.createElement('script'); script.src=url; script.onload = resolve; script.onerror = reject; document.head.appendChild(script); }); } function waitForReact() { return new Promise((resolve) => { const checkReact = () => { if (window.React && window.ReactDOM && window.vis) { resolve(); } else { setTimeout(checkReact, 100); } }; checkReact(); }); } Promise.all([ loadScript('https://unpkg.com/react@17/umd/react.production.min.js'), loadScript('https://unpkg.com/react-dom@17/umd/react-dom.production.min.js'), loadScript('https://unpkg.com/vis-network/dist/vis-network.min.js') ]) .then(waitForReact) .then(() => { console.log('React, ReactDOM, and vis-network are loaded and available'); initApp(); }) .catch(error => { console.error('Error loading scripts:', error); }); function initApp() { const NetworkPage = () => { const [nodes, setNodes] = React.useState([]); const [edges, setEdges] = React.useState([]); const [loading, setLoading] = React.useState(true); const [clickedEdge, setClickedEdge] = React.useState(null); const [clickedNode, setClickedNode] = React.useState(null); const [showTransparent, setShowTransparent] = React.useState(false); React.useEffect(() => { // Static data for debugging const staticNodes = [ {'id': 1, 'label': 'wininit.exe', 'type': 'process', 'rank': 0}, {'id': 2, 'label': 'services.exe', 'type': 'process', 'rank': 1}, {'id': 3, 'label': 'sysmon.exe', 'type': 'process', 'rank': 2}, {'id': 4, 'label': 'comb-file', 'type': 'file', 'rank': 1, 'nodes': [ 'c:\\windows\\system32\\mmc.exe', 'c:\\mozillafirefox\\firefox.exe', 'c:\\windows\\system32\\cmd.exe', 'c:\\windows\\system32\\dllhost.exe', 'c:\\windows\\system32\\conhost.exe', 'c:\\wireshark\\tshark.exe', 'c:\\confer\\repwmiutils.exe', 'c:\\windows\\system32\\searchprotocolhost.exe', 'c:\\windows\\system32\\searchfilterhost.exe', 'c:\\windows\\system32\\consent.exe', 'c:\\python27\\python.exe', 'c:\\windows\\system32\\audiodg.exe', 'c:\\confer\\repux.exe', 'c:\\windows\\system32\\taskhost.exe' ]}, {'id': 5, 'label': 'c:\\wireshark\\dumpcap.exe', 'type': 'file', 'rank': 1}, {'id': 6, 'label': 'c:\\windows\\system32\\audiodg.exe', 'type': 'file', 'rank': 1} ]; const staticEdges = [ {'source': 1, 'target': 2, 'label': 'procstart', 'alname': null, 'time': '2022-07-19 16:00:17.074477', 'transparent': false}, {'source': 2, 'target': 3, 'label': 'procstart', 'alname': null, 'time': '2022-07-19 16:00:17.531504', 'transparent': false}, {'source': 4, 'target': 3, 'label': 'moduleload', 'alname': null, 'time': '2022-07-19 16:01:03.194938', 'transparent': false}, {'source': 5, 'target': 3, 'label': 'moduleload', 'alname': 'Execution - SysInternals Use', 'time': '2022-07-19 16:01:48.497418', 'transparent': false}, {'source': 6, 'target': 3, 'label': 'moduleload', 'alname': 'Execution - SysInternals Use', 'time': '2022-07-19 16:05:04.581065', 'transparent': false} ]; const sortedEdges = staticEdges.sort((a, b) => new Date(a.time) - new Date(b.time)); const nodesByRank = staticNodes.reduce((acc, node) => { const rank 
= node.rank || 0; if (!acc[rank]) acc[rank] = []; acc[rank].push(node); return acc; }, {}); const nodePositions = {}; const rankSpacingX = 200; const ySpacing = 100; Object.keys(nodesByRank).forEach(rank => { const nodesInRank = nodesByRank[rank]; nodesInRank.sort((a, b) => { const aEdges = staticEdges.filter(edge => edge.source === a.id || edge.target === a.id); const bEdges = staticEdges.filter(edge => edge.source === b.id || edge.target === b.id); return aEdges.length - bEdges.length; }); const totalNodesInRank = nodesInRank.length; nodesInRank.forEach((node, index) => { nodePositions[node.id] = { x: rank * rankSpacingX, y: index * ySpacing - (totalNodesInRank * ySpacing) / 2, }; }); }); const positionedNodes = staticNodes.map(node => ({ ...node, x: nodePositions[node.id].x, y: nodePositions[node.id].y, })); setNodes(positionedNodes); setEdges(sortedEdges); setLoading(false); }, []); const handleNodeClick = (event) => { const { nodes: clickedNodes } = event; if (clickedNodes.length > 0) { const nodeId = clickedNodes[0]; const clickedNode = nodes.find(node => node.id === nodeId); setClickedNode(clickedNode || null); } }; const handleEdgeClick = (event) => { const { edges: clickedEdges } = event; if (clickedEdges.length > 0) { const edgeId = clickedEdges[0]; const clickedEdge = edges.find(edge => `${edge.source}-${edge.target}` === edgeId); setClickedEdge(clickedEdge || null); } }; const handleClosePopup = () => { setClickedEdge(null); setClickedNode(null); }; const toggleTransparentEdges = () => { setShowTransparent(prevState => !prevState); }; if (loading) { return React.createElement('div', null, 'Loading...'); } const formatFilePath = (filePath) => { const parts = filePath.split('\\'); if (filePath.length > 12 && parts[0] !== 'comb-file') { return `${parts[0]}\\...`; } return filePath; }; const filteredNodes = showTransparent ? nodes : nodes.filter(node => edges.some(edge => (edge.source === node.id || edge.target === node.id) && !edge.transparent) ); const filteredEdges = showTransparent ? edges : edges.filter(edge => !edge.transparent); const options = { layout: { hierarchical: false }, edges: { color: { color: '#000000', highlight: '#ff0000', hover: '#ff0000' }, arrows: { to: { enabled: true, scaleFactor: 1 } }, smooth: { type: 'cubicBezier', roundness: 0.2 }, font: { align: 'top', size: 12 }, }, nodes: { shape: 'dot', size: 20, font: { size: 14, face: 'Arial' }, }, interaction: { dragNodes: true, hover: true, selectConnectedEdges: false, }, physics: { enabled: false, stabilization: { enabled: true, iterations: 300, updateInterval: 50 }, }, }; const graphData = { nodes: filteredNodes.map(node => { let label = node.label; if (node.type === 'file' && node.label !== 'comb-file') { label = formatFilePath(node.label); } return { id: node.id, label: label, title: node.type === 'file' ? node.label : '', x: node.x, y: node.y, shape: node.type === 'process' ? 'circle' : node.type === 'socket' ? 'diamond' : 'box', size: node.type === 'socket' ? 40 : 20, font: { size: node.type === 'socket' ? 10 : 14, vadjust: node.type === 'socket' ? -50 : 0 }, color: { background: node.transparent ? "rgba(151, 194, 252, 0.5)" : "rgb(151, 194, 252)", border: "#2B7CE9", highlight: { background: node.transparent ? "rgba(210, 229, 255, 0.1)" : "#D2E5FF", border: "#2B7CE9" }, }, className: node.transparent && !showTransparent ? 'transparent' : '', }; }), edges: filteredEdges.map(edge => ({ from: edge.source, to: edge.target, label: edge.label, color: edge.alname && edge.transparent ? '#ff9999' : edge.alname ? 
'#ff0000' : edge.transparent ? '#d3d3d3' : '#000000', id: `${edge.source}-${edge.target}`, font: { size: 12, align: 'horizontal', background: 'white', strokeWidth: 0 }, className: edge.transparent && !showTransparent ? 'transparent' : '', })), }; // Render the network visualization return React.createElement( 'div', { className: 'network-container' }, React.createElement( 'button', { className: 'toggle-button', onClick: toggleTransparentEdges }, showTransparent ? "Hide Transparent Edges" : "Show Transparent Edges" ), React.createElement( 'div', { id: 'network' }, React.createElement(vis.Network, { graph: graphData, options: options, events: { select: handleNodeClick, doubleClick: handleEdgeClick } }) ), clickedNode && React.createElement('div', { className: 'popup' }, React.createElement('button', { onClick: handleClosePopup }, 'Close'), React.createElement('h2', null, `Node: ${clickedNode.label}`), React.createElement('p', null, `Type: ${clickedNode.type}`) ), clickedEdge && React.createElement('div', { className: 'popup' }, React.createElement('button', { onClick: handleClosePopup }, 'Close'), React.createElement('h2', null, `Edge: ${clickedEdge.label}`), React.createElement('p', null, `AL Name: ${clickedEdge.alname || 'N/A'}`) ) ); }; const rootElement = document.getElementById('root'); if (rootElement) { ReactDOM.render(React.createElement(NetworkPage), rootElement); } else { console.error('Root element not found'); } } });             network_dashboard.css:             /* src/components/NetworkPage.css */ .network-container { height: 100vh; width: 100vw; display: flex; justify-content: center; align-items: center; position: relative; } #network-visualization { height: 100%; width: 100%; } /* Toggle button styling */ .toggle-button { /* position: absolute;*/ top: 10px; left: 10px; background-color: #007bff; color: white; border: none; border-radius: 20px; padding: 8px 16px; font-size: 14px; cursor: pointer; box-shadow: 0 4px 8px rgba(0, 0, 0, 0.1); } .toggle-button:hover { background-color: #0056b3; } /* Popup styling */ .popup { background-color: white; border: 1px solid #ccc; padding: 10px; border-radius: 8px; box-shadow: 0 4px 8px rgba(0, 0, 0, 0.1); font-size: 14px; width: 100%; height: 100%; position: relative; } /* Custom Scrollbar Styles */ .scrollable-popup { max-height: 150px; overflow-y: auto; scrollbar-width: thin; /* Firefox */ scrollbar-color: transparent; /* Firefox */ } .scrollable-popup::-webkit-scrollbar { width: 8px; /* WebKit */ } .scrollable-popup::-webkit-scrollbar-track { background: transparent; /* WebKit */ } .scrollable-popup::-webkit-scrollbar-thumb { background: grey; /* WebKit */ border-radius: 8px; } .scrollable-popup::-webkit-scrollbar-thumb:hover { background: darkgrey; /* WebKit */ } /* Popup edge and node styling */ .popup-edge { border: 2px solid #ff0000; color: #333; } .popup-node { border: 2px solid #007bff; color: #007bff; } .close-button { position: absolute; top: 5px; right: 5px; background: transparent; border: none; font-size: 16px; cursor: pointer; } .close-button:hover { color: red; }               network_dashboard.xml             <dashboard script="network_dashboard.js" stylesheet="network_dashboard.css"> <label>Network Visualization</label> <row> <panel> <html> <div id="root" style="height: 800px;"></div> </html> </panel> </row> </dashboard>              
Today and every day, Splunk celebrates the importance of customer experience throughout our product, employees, and community. On October 1st, we'll be taking our celebration to the next level by participating in CX Day. The Customer Experience Professionals Association's focus for CX Day 2024 is "Good CX delivers better outcomes for customers, employees and organizations." To build off this theme, Splunk developed an interactive quiz to help you, our customers, identify your CX emotional quotient and why this can be important to you as an employee or in how you serve your customers in this digital marketplace. CX Day is on Tuesday, October 1st. To celebrate, we've arranged a remarkable panel to dive into the critical impact of an exceptional customer experience. Join Toni Pavlovich, Splunk's Chief Customer Officer (and SVP of Customer and Partner Success at Cisco), and our panelists Dr. Bonnie An Henderson, President and CEO of HelpMeSee, and Leonard Wall, Deputy CISO of Clayton Homes, for an engaging discussion on elevating customer success for a brighter future by looking at how these organizations are positively impacting their customers' lives. We sincerely thank you for being a part of our Splunk community and would love to hear how you feel about customer experience within your organization. Mark your calendars for 10/1 to join the CX Day LinkedIn Live discussion with us as we celebrate Customer Success every day!
Hi,

I downloaded the Mac Intel version 4.2.1 of the app to use numpy and pandas. I copied over exec_anaconda.py as per the README, and also util.py (exec_anaconda.py uses it), and added a test script with the preamble mentioned in the README:

#!/usr/bin/python
import exec_anaconda
exec_anaconda.exec_anaconda()

import pandas as pd
import sys
print(sys.path)

This runs, but it triggers macOS security alerts for a whole bunch of files (easily more than 25, and some need multiple clicks). I have "Allow applications downloaded from App Store and identified developers" in my security settings. Given that this package is from Splunk, can Splunk codesign it (or do whatever else is needed) so it is marked as from an identified developer? Or is there a setting I can use to turn off the warnings for everything from a single tar.gz, or everything under a folder, etc.? I'm on macOS Sonoma 14.6 running Splunk 9.2.2.

Thanks
Hi Splunkers,

I have a question and I need help from the experts. I'm working on creating a heartbeat tracker search that monitors when a host gets spun up. Whether it's Windows or Linux, it gets generic apps from the server class; there is a server class built out there that is just looking for any host that isn't already in a server class. The purpose of the heartbeat tracker is to inform us that there is a brand-new host that isn't in the server class. The ask is to track the hosts showing up in the heartbeat index, and if those hosts are there for multiple days, they need to be addressed. As an example, every host that gets spun up, whether we know about it or not, is going to get the heartbeat initially: it spins up, it gets the heartbeat, and once it gets its real app it stops sending logs to the heartbeat index. So what I really want to know is, per host, how many days it has been talking to the X index. If I get a host that has been talking to the X index for several days, then I know that isn't the initial start-up; it's a problem that needs to be looked at.

| tstats count where index=X by host index _time span=1d
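A minimal sketch of one way to turn that tstats output into a per-host day count (the index name X and the 3-day threshold are placeholders to adjust):

| tstats count where index=X by host _time span=1d
| stats dc(_time) as days_seen min(_time) as first_seen max(_time) as last_seen by host
``` keep only hosts still heartbeating well past the initial spin-up window ```
| where days_seen > 3
| convert ctime(first_seen) ctime(last_seen)
| sort - days_seen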
I'm trying to build a Local Attack Range, but it fails when it tries to restart splunk.service. The Splunk instance does restart, but it fails when the systemctl command is used. I did ensure that THP was disabled, SELinux was disabled, and ulimits were set properly on the host. I did increase the timeout, but it still fails to restart even after 30 minutes. "python attack_range.py build" does successfully create the Splunk instance and installs all the required apps & TAs. It just fails to restart Splunk Enterprise as a systemd service within the Vagrant VM. Any feedback would be appreciated!!!

TASK [splunk_server_post : change password splunk] *****************************
changed: [ar-splunk-attack-range-key-pair-ar]

TASK [splunk_server_post : restart splunk] *************************************
fatal: [ar-splunk-attack-range-key-pair-ar]: FAILED! => {"changed": false, "msg": "Unable to restart service splunk: Job for splunk.service failed because a timeout was exceeded.\nSee \"systemctl status splunk.service\" and \"journalctl -xe\" for details.\n"}

RUNNING HANDLER [splunk_server_post : restart splunk] **************************

PLAY RECAP *********************************************************************
ar-splunk-attack-range-key-pair-ar : ok=139 changed=64 unreachable=0 failed=1 skipped=0 rescued=0 ignored=0

Ansible failed to complete successfully. Any error output should be visible above. Please fix these errors and try again.
2024-09-19 16:22:49,709 - ERROR - attack_range - vagrant failed to build
(attack-range-py3.8) aradmin@attackrange:~/attack_range$

Here is my attack_range.yml file:

general:
  attack_range_password: "xxxxx"
  cloud_provider: local
  use_prebuilt_images_with_packer: "0"
  ingest_bots3_data: "1"
local:
splunk_server:
  # Enable Enterprise Security
  install_es: "1"
  # Save to the apps folder from Attack Range
  splunk_es_app: "splunk-enterprise-security_732.spl"
phantom_server:
  phantom_server: "0" # Enable/Disable Phantom Server
kali_server:
  kali_server: "1"
windows_servers:
  - hostname: ar-win-dc
    windows_image: windows-server-2022
    create_domain: '1'
    install_red_team_tools: '1'
    bad_blood: '1'
  - hostname: ar-win-2
    windows_image: windows-2019-v3-0-0
    join_domain: '1'
    install_red_team_tools: '1'
linux_servers:
  - hostname: ar-linux
I have the Splunk App for SOAR Export running. I can open one of the forwarding events, click "Save and Preview", and send any events into SOAR. This is working. I can go into the Searches, reports, and alerts area and find the alert the app created; it's scheduled, running, and finding notables. This is working. What's not working is that when the scheduled alert runs, what it finds never gets sent into SOAR. So, manually sending to SOAR works from the app, and the scheduled alert the app uses is running and finding notables, but nothing ever goes into SOAR. The owner is nobody for all of the searches. Is this a permissions issue, maybe?
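If it helps to narrow this down, here is a diagnostic sketch for checking whether the alert action actually fires when the schedule runs; the saved-search name is a placeholder and field availability can vary by Splunk version.

Did the scheduled search run, return results, and trigger any alert actions?

index=_internal sourcetype=scheduler savedsearch_name="<name of the SOAR export saved search>"
| table _time savedsearch_name app user status result_count alert_actions

Did the modular alert action itself run?

index=_internal sourcetype=splunkd component=sendmodalert
| table _time log_level action sid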
I have a hidden search. When I have a result I want to set the token based on that result; otherwise, if I don't have any results, I want to set the token to *. However, this does not work for me yet (specifically the no-results part, where the token should be set to *).

<search id="latest_product_id">
  <query>
    | mysearch | head 1 | fields product_id
  </query>
  <earliest>-24h@h</earliest>
  <latest>now</latest>
  <refresh>60</refresh>
  <depends>
    <condition token="some_token">*</condition>
  </depends>
  <done>
    <condition match="'job.resultCount'!= 0">
      <set token="latest_product_id">$result.product_id$</set>
    </condition>
    <condition match="'job.resultCount'== 0">
      <set token="latest_product_id">*</set>
    </condition>
  </done>
</search>
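One SPL-side workaround to sketch, keeping the search and token names above: make the query itself always return a row, so $result.product_id$ is always populated and the <done> block only needs a single <set>, with no no-results branch:

| mysearch | head 1 | fields product_id
| appendpipe
    [ stats count
    | where count=0
    | eval product_id="*"
    | fields product_id ]

If the base search returns nothing, the appendpipe branch adds a single row with product_id="*"; if it returns a result, the branch is filtered out by where count=0.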
We have recently moved to a new Splunk environment and have formally cut away from the old one. The new environment works great and the data is flowing as expected. We now have a few years' worth of data in Splunk sitting on servers that are going to be repurposed. My question is: what is the best way to move all that data out of Splunk? I was thinking of just freezing the indexes and moving the frozen buckets to S3, but I am not sure if that is the best way to do it. Any suggestions would be welcome. Thanks
  How can I remove ONLY the overlay Total on a visualization? TIA  
So I have an SPL search that searches an index and brings back over 1.8 million events. I have done some evals to get the project, the size of the file, and the speed. What I want to do is just list the top 10 speeds and their relevant project (it could be that the same project is listed 10 times). I have done something with stats(sum), but I don't want the sum. Out of the 1.8 million events I need to show just the top 10 events with their speed and project number. My fields from eval are ProjectID, MB (the size), and SecTM (the speed). I seem to be stuck with Splunk doing a sum for the entire project, and I guess that would be true since I am using sum.
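A minimal sketch of one way to list the ten fastest individual events rather than per-project sums, assuming the base search and the eval'd fields ProjectID, MB, and SecTM from the question (index name is a placeholder):

index=your_index
``` ...your existing evals that produce ProjectID, MB, and SecTM... ```
| sort 0 - SecTM
| head 10
| table ProjectID MB SecTM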
Hi all. I am running into an issue with the Azure AD Graph asset in SOAR. I had an app created in Azure app registrations with the correct permissions based on the documentation. I configured the asset in SOAR with the corresponding tenant, app, and secret information. The redirect URI was entered into the Azure app registration page with /result per the instructions. When I test connectivity, the test will time out after about a minute. I may have missed something in the documentation, but the configuration all seems correct. Has anyone else run into this?
Hi all,

I am trying to compare the PERC90 response times of an application before and after a software release, for the 50 most-used actions. Here's the query:

index=myindex source=mysource
| rex field=_raw "^(?:[^;\n]*;){4}\s+(?P<utc_tsl_tranid>\w+:\w+)"
| rex field=_raw "^.+\/(?P<ui_locend>\w+\.[a-z_-]+\.\w+\.\w+)"
| dedup utc_tsl_tranid
| stats sum(DURATION) as weight by ui_locend
| sort - weight
| head 50

Is there a way I can compare two time periods (for example: first period 2024-08-10 to 2024-08-15, second period 2024-08-20 to 2024-08-25)? The field ui_locend has to match, and I would like to compare PERC90 of DURATION, which can be calculated with the stats command. It's a tricky one; I will appreciate every idea.
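A sketch of one approach, assuming the base search and field names above: search a range that covers both windows, label each event with its period, and split the perc90 by that label. The dates and period labels follow the example in the question and are placeholders:

index=myindex source=mysource earliest="08/10/2024:00:00:00" latest="08/26/2024:00:00:00"
| rex field=_raw "^(?:[^;\n]*;){4}\s+(?P<utc_tsl_tranid>\w+:\w+)"
| rex field=_raw "^.+\/(?P<ui_locend>\w+\.[a-z_-]+\.\w+\.\w+)"
| dedup utc_tsl_tranid
``` label each event with the window it falls into; events between the windows are dropped ```
| eval period=case(_time<strptime("2024-08-16", "%Y-%m-%d"), "before_release",
                   _time>=strptime("2024-08-20", "%Y-%m-%d"), "after_release")
| where isnotnull(period)
| stats perc90(DURATION) as p90 sum(DURATION) as weight by ui_locend period
``` pivot to one row per action, rank by total usage, keep the top 50 ```
| eval before_p90=if(period="before_release", p90, null())
| eval after_p90=if(period="after_release", p90, null())
| stats sum(weight) as weight values(before_p90) as before_p90 values(after_p90) as after_p90 by ui_locend
| sort - weight
| head 50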
We are trying to ingest a STIX file into Threat Intelligence Management. The STIX file parses, but nothing of interest is found in it; the _internal index has the message 'status="No observables or indicators found in file"'. The STIX file has the format below (which, from what I can tell, is a valid format containing indicators):

{
  "more": false,
  "objects": [
    {
      "confidence": "70",
      "created": "2023-09-08T00:02:39.000Z",
      "description": "xxxxxxxxx",
      "id": "xxxxxxx",
      "modified": "2023-09-08T00:02:39.000Z",
      "name": "xxxxxxx",
      "pattern": "[ipv4-addr:value = '101.38.159.17']",
      "spec_version": "2.1",
      "type": "indicator",
      "valid_from": "2023-09-08T00:02:39.000Z",
      "valid_until": "2025-11-07T00:02:39.000Z"
    },
    ......

Has anyone had any success with STIX files and would be able to share the basic format of what worked for them? Or does anyone have anything else to suggest?

Many thanks
Simon

Splunk Enterprise Security
Hi,

I've seen many recent changes in SOAR 6.3 regarding prompts, but I still don't see a way to define the allowed-choices list as a parameter when creating a prompt block from the GUI. Many times the options that are available to the user are dynamic, so hard-coding the choices list isn't practical, is prone to getting out of date, and forces playbook redeployments. The only way I see so far is by using code blocks or by adding custom code to the prompt blocks (and losing the GUI handling in the process). Is there any way I'm missing to get the question choices from a datapath or a custom list?
I am new to Splunk administration, and I need a query that captures changes to the configuration of switches, firewalls, routers, etc. in my environment.
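As a starting point, here is a sketch of what such a search often looks like, assuming the network devices send syslog to Splunk; the index, sourcetype, keywords, and the user field are placeholders to adapt to whatever your devices actually log:

index=network sourcetype=syslog ("CONFIG_I" OR "configured from" OR "config change" OR "commit")
| stats count earliest(_time) as first_change latest(_time) as last_change values(user) as user by host
| convert ctime(first_change) ctime(last_change)
| sort - last_change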
So I've got a list containing multiple strings, and depending on these strings I want to run one or more actions using a filter. When I use the 'in' filter to check if a certain string is in the list, the matching condition is not met.

Example input = ['block_ioc', 'reset_password']

Filter block:

I can successfully use the 'in' condition in a decision block, just not in a filter block.

Any ideas?
Ref Doc - Splunk Add-on for GCP Docs: Currently, the Cloud Storage Bucket input doesn't support pre-processing of data, such as untar/unzip/ungzip, etc. The data must be pre-processed and ready for ingestion in a UTF-8 parseable format.