If you are using Classic/SimpleXML dashboards, you can do this with CSS. You need to give your panel an id (so it gets tagged and CSS can select it), and you need to know the order of the series in the chart, since they are numbered. For example, if you name your panel "panel_one" and your Total is the second series (index 1), you could do something like this:

<panel id="panel_one">
  <html depends="$alwaysHide$">
    <style>
      #panel_one svg g.highcharts-data-labels.highcharts-series-1 {
        display: none !important;
      }
    </style>
  </html>
  <chart>
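If you ever want to hide the series itself rather than just its data labels, a similar selector might work. This is a hypothetical variation on the same pattern, untested against your specific chart:

#panel_one svg g.highcharts-series-1 {
  display: none !important;
}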
Thank you, below is splunkd.log:

09-20-2024 06:36:54.626 +0000 INFO Shutdown [2498 Shutdown] - shutting down level="ShutdownLevel_HttpClient"
09-20-2024 06:36:54.626 +0000 INFO Shutdown [2498 Shutdown] - shutting down name="HttpClient"
09-20-2024 06:36:54.626 +0000 INFO Shutdown [2498 Shutdown] - shutting down level="ShutdownLevel_DmcProxyHttpClient"
09-20-2024 06:36:54.626 +0000 INFO Shutdown [2498 Shutdown] - shutting down level="ShutdownLevel_Duo2FAHttpClient"
09-20-2024 06:36:54.626 +0000 INFO Shutdown [2498 Shutdown] - shutting down level="ShutdownLevel_S3ConnectionPoolManager"
09-20-2024 06:36:54.626 +0000 INFO Shutdown [2498 Shutdown] - shutting down name="S3ConnectionPoolManager"
09-20-2024 06:36:54.626 +0000 INFO Shutdown [2498 Shutdown] - shutting down level="ShutdownLevel_AwsSdk"
09-20-2024 06:36:54.626 +0000 INFO Shutdown [2498 Shutdown] - shutting down name="loader"
09-20-2024 06:36:54.628 +0000 INFO Shutdown [2498 Shutdown] - Shutdown complete in 5.124 seconds
09-20-2024 06:36:54.629 +0000 INFO loader [2296 MainThread] - All pipelines finished.
Sorry, I made a mistake with the calculation of the totals. I adjusted the search in my previous answer.
Hi Paul, thanks for the help. But this still has some issues. Output:

department     OLD_RUNS   NEW_RUNS   total   PERC
Department1    10         0          10      0%
Department1    0          20         20      100%

Basically, the old and new counts of the same department are not in the same row, so with respect to new runs all percentages come out as 100%, because old runs show as 0.
Subsearches are limited to 50k events, which is one of the issues with using joins. Also, your dedup seems to ignore whether more than one department has the same version and thumb_print (unless, of course, thumb_prints or versions are unique to a department). Try something like this:

index=abc
| dedup version thumb_print department
| eval version=if(version="2.0","NEW_RUNS","OLD_RUNS")
| chart count(thumb_print) by department version
| fillnull value=0
| eval total=NEW_RUNS+OLD_RUNS
| eval perc=round(100*NEW_RUNS/total,2)
| eval department=substr(department, 1, 50)
| table department OLD_RUNS NEW_RUNS perc
| sort -perc
Forget the rest of the search. What do you get from the following?

index="logs" sourceip="x.x.x.x" OR destip="x.x.x.x"
| lookup file.csv cidr AS sourceip OUTPUT provider AS sourceprovider, area AS sourcearea, zone AS sourcezone, region AS sourceregion, cidr AS src_cidr
| lookup file.csv cidr AS destip OUTPUT provider AS destprovider, area AS destarea, zone AS destzone, region AS destregion, cidr AS dest_cidr
| table sourceip sourceprovider sourcearea sourcezone sourceregion src_cidr destip destprovider destarea destzone destregion dest_cidr

Is the output correct? Using your mock lookup data, I made the following emulation:

| makeresults format=csv data="sourceip, destip
1.1.1.116,10.5.5.5
10.0.0.5,2.2.2.3
2.2.2.8, 1.1.1.90
192.168.8.1,10.6.0.10"
``` the above emulates index="logs" sourceip="x.x.x.x" OR destip="x.x.x.x" ```
| lookup file.csv cidr AS sourceip OUTPUT provider AS sourceprovider, area AS sourcearea, zone AS sourcezone, region AS sourceregion, cidr AS src_cidr
| lookup file.csv cidr AS destip OUTPUT provider AS destprovider, area AS destarea, zone AS destzone, region AS destregion, cidr AS dest_cidr
| fields sourceip sourceprovider sourcearea sourcezone sourceregion src_cidr destip destprovider destarea destzone destregion dest_cidr

This is what I get, exactly as expected. One result row per line; where an IP had no lookup match, those fields were simply blank:

1.1.1.116    Unit 1  Finance  2   1.1.1.1/24   |  10.5.5.5     (no match)
10.0.0.5     (no match)                        |  2.2.2.3      Unit 2  HR  16  2.2.2.2/27
2.2.2.8      Unit 2  HR       16  2.2.2.2/27   |  1.1.1.90     Unit 1  Finance  2  1.1.1.1/24
192.168.8.1  (no match)                        |  10.6.0.10    (no match)
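As an aside: for the lookup to match an exact IP against a CIDR range at all, the lookup definition has to be CIDR-aware. A minimal transforms.conf sketch, assuming (hypothetically) your lookup definition is named "file" and is backed by file.csv; adjust the names to your actual definition:

[file]
filename = file.csv
# treat the cidr column as a CIDR range when matching input values
match_type = CIDR(cidr)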
index=abc
| dedup version thumb_print
| stats count(eval(if(version!="2.0",thumb_print,null()))) as OLD_RUNS count(eval(if(version="2.0",thumb_print,null()))) as NEW_RUNS by department
| fillnull value=0
| eval total=NEW_RUNS+OLD_RUNS
| eval perc=((NEW_RUNS/total)*100)
| eval department=substr(department, 1, 50)
| eval perc=round(perc, 2)
| sort -perc
If you don't have GUI access to the remote search head, you must ask your infra team. They should be able to confirm whether the custom fields are configured on the remote search head.
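If you at least have search access to that search head, a hedged sketch using the rest command may let you check for yourself. This assumes your role is allowed to read configuration via REST, and "your_sourcetype" is a placeholder for the sourcetype you care about:

| rest /services/configs/conf-props
| search title="your_sourcetype"
| table title EXTRACT-* REPORT-*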
SPL can present a steeper learning curve compared with non-streaming languages. But once you get some basics, it is very rewarding, because it gives you so much freedom. That said, SPL's JSON path notation takes some getting used to. The JSON functions are actually OK once you understand the notation. Before I give my suggestions, let's examine your original trial.

| spath input=json.msg output=msg_raw path=json.msg

This will not give you the desired output, because the embedded JSON object in json.msg does not contain a path named json.msg. The object that does contain this path is _raw. If you try

| spath ``` input=_raw implied ``` output=msg_raw path=json.msg

you would have extracted a field named msg_raw that duplicates the value of json.msg:

json.msg = {"name":"", "connection":22234743, "time":20000, "success":false, "type":"Prepared", "batch":false, "querySize":1, "batchSize":0, "query":["select * from whatever.whatever w where w.whatever in (?,?,?) "], "params":[["1","2","3"]]}
msg_raw  = {"name":"", "connection":22234743, "time":20000, "success":false, "type":"Prepared", "batch":false, "querySize":1, "batchSize":0, "query":["select * from whatever.whatever w where w.whatever in (?,?,?) "], "params":[["1","2","3"]]}

Of course, this is not what you wanted. What did we learn here? The path option in spath addresses paths inside the JSON object given as input. But if you try

| spath input=json.msg

you will get these fields from json.msg:

batch      = false
batchSize  = 0
connection = 22234743
name       =
params{}{} = 1 2 3 (multivalue)
querySize  = 1
query{}    = select * from whatever.whatever w where w.whatever in (?,?,?)
success    = false
time       = 20000
type       = Prepared

What did we learn here? Place the field name whose value is itself a valid JSON object directly in spath's input option to extract from that field. Additionally, Splunk uses {} to denote fields extracted from a JSON array, and turns them into a multivalue field.

In your other comment, you said you want the equivalent of `jq '.json.msg|fromjson|.query[0]'`. Such would be trivial from the above result. Add

| eval jq_equivalent = mvindex('params{}{}', 0)
| fields params* jq_equivalent

and you get

params{}{}    = 1 2 3 (multivalue)
jq_equivalent = 1

(The same pattern applied to 'query{}' would give you .query[0] itself.) What did we learn here? 1. mvindex selects a value from a multivalue field (params{}{}), using a base-0 index; 2. use single quotes to dereference the value of a field whose name contains special characters.

A word of caution: if all you want from params{}{} is a single multivalue field, the above can be sufficient. But params[[]] is an array of arrays. To complicate things, your developer doesn't do you the best of service by throwing the query[] array into the same flat structure. As the JSON array query can have more than one element, my speculation is that your developer intended each element in the top-level params array to hold the params for the corresponding element of query[].

What if, instead of

{\"name\":\"\", \"connection\":22234743, \"time\":20000, \"success\":false, \"type\":\"Prepared\", \"batch\":false, \"querySize\":1, \"batchSize\":0, \"query\":[\"select * from whatever.whatever w where w.whatever in (?,?,?) \"], \"params\":[[\"1\",\"2\",\"3\"]]}

your raw data contains a json.msg of this value?

"{\"name\":\"\", \"connection\":22234743, \"time\":20000, \"success\":false, \"type\":\"Prepared\", \"batch\":false, \"querySize\":2, \"batchSize\":0, \"query\":[\"select * from whatever.whatever w where w.whatever in (?,?,?) \", \"select * from whatever.whatever2 w where w.whatever2 in (?,?) \"], \"params\":[[\"1\",\"2\",\"3\"],[\"4\",\"5\"]]}"

That is, query[] and params[] each contain two elements? (For convenience, I assume that querySize represents the number of elements in these arrays. We can live without this external count, but why complicate our lives in a tutorial.) Using the above search, you will find query{} and params{}{} to contain

querySize  = 2
query{}    = select * from whatever.whatever w where w.whatever in (?,?,?)
             select * from whatever.whatever2 w where w.whatever2 in (?,?)
params{}{} = 1 2 3 4 5 (multivalue)

This is one of the shortcomings of flattening structured data like JSON; it is not unique to SPL, but here the shortcoming becomes more obvious. On top of the flattened structure, the spath command also cannot handle an array of arrays correctly. Now what?

Here is what I would use to get past this barrier. (This is not the only way, but the JSON functions introduced in 8.2 work really well while preserving semantic context.)

| spath input=json.msg
| eval params_array = json_array_to_mv(json_extract('json.msg', "params"))
| eval idx = mvrange(0, querySize) ``` assuming querySize is size of query{} ```
| eval query_params = mvmap(idx, json_object("query", mvindex('query{}', idx), "params", mvindex(params_array, idx)))
| fields - json.msg params* query{} idx
| mvexpand query_params

With this, the output contains two rows that differ only in query_params (the shared fields batch=false, batchSize=0, connection=22234743, querySize=2, success=false, time=20000, type=Prepared repeat in both):

query_params = {"query":"select * from whatever.whatever w where w.whatever in (?,?,?) ","params":"[\"1\",\"2\",\"3\"]"}
query_params = {"query":"select * from whatever.whatever2 w where w.whatever2 in (?,?) ","params":"[\"4\",\"5\"]"}

I think you know what I am going for by now. What did we learn here? To compensate for the unfortunate implied semantics your developer forces on you, first construct an intermediary JSON object that binds each query with each array of params. Then use mvexpand to separate the elements. (Admittedly, json_array_to_mv is an oddball function at first glance. But once you understand how Splunk uses multivalue, you'll get used to the concept. Hopefully you will find many merits in a multivalue representation.)

From here, you can use spath again to get the desired results, but I find the JSON functions to be simpler AND more semantic, considering there are only two keys in this intermediary JSON. Add the following to the above:

| eval query = json_extract(query_params, "query")
| eval params = json_array_to_mv(json_extract(query_params, "params"))

With this, you get the final result, again two rows (shared fields as before):

query  = select * from whatever.whatever w where w.whatever in (?,?,?)
params = 1 2 3 (multivalue)

query  = select * from whatever.whatever2 w where w.whatever2 in (?,?)
params = 4 5 (multivalue)

Hope this is a useful format for your further processing. Below is an emulation of the above 2-query mock data that I adapted from @ITWhisperer's original emulation. Play with it and compare with real data.
| makeresults
| eval _raw="{ \"time\": \"2024-09-19T08:03:02.234663252Z\", \"json\": { \"ts\": \"2024-09-19T15:03:02.234462341+07:00\", \"logger\": \"<anonymized>\", \"level\": \"WARN\", \"class\": \"net.ttddyy.dsproxy.support.SLF4JLogUtils\", \"method\": \"writeLog\", \"file\": \"<anonymized>\", \"line\": 26, \"thread\": \"pool-1-thread-1\", \"arguments\": {}, \"msg\": \"{\\\"name\\\":\\\"\\\", \\\"connection\\\":22234743, \\\"time\\\":20000, \\\"success\\\":false, \\\"type\\\":\\\"Prepared\\\", \\\"batch\\\":false, \\\"querySize\\\":2, \\\"batchSize\\\":0, \\\"query\\\":[\\\"select * from whatever.whatever w where w.whatever in (?,?,?) \\\", \\\"select * from whatever.whatever2 w where w.whatever2 in (?,?) \\\"], \\\"params\\\":[[\\\"1\\\",\\\"2\\\",\\\"3\\\"],[\\\"4\\\",\\\"5\\\"]]}\", \"scope\": \"APP\" }, \"kubernetes\": { \"pod_name\": \"<anonymized>\", \"namespace_name\": \"<anonymized>\", \"labels\": { \"whatever\": \"whatever\" }, \"container_image\": \"<anonymized>\" } }"
| spath ``` data emulation ```

Hope this helps.
Hi,

Join is not returning the data with the subsearch. I tried many options from other answers but nothing is working out. The goal is to check how many departments are using the latest version of some software, compared to all older versions together.

My search query:

index=abc version!="2.0"
| dedup version thumb_print
| stats count(thumb_print) as OLD_RUNS by department
| join department
    [search index=abc version="2.0"
    | dedup version thumb_print
    | stats count(thumb_print) as NEW_RUNS by department ]
| eval total=OLD_RUNS + NEW_RUNS
| fillnull value=0
| eval perc=((NEW_RUNS/total)*100)
| eval department=substr(department, 1, 50)
| eval perc=round(perc, 2)
| table department OLD_RUNS NEW_RUNS perc
| sort -perc

Overall, this search over a 1-week time period is expected to return more than 100k events.
Is there any step or checklist I can use to check or troubleshoot this as a first step? I'm just curious why the logs stopped being ingested into Splunk, because previously I didn't have any issue using this method.
I did not write the logs into a file because of a lack of resources.
Hi @gcusello, thank you for your answer. I did not install any add-ons for Fortinet. Sure: I actually have 1 SH and 2 indexers, but I only ingest this log to 1 indexer. The other logs, from other services, are ingested correctly and can be searched from the SH.
In our App/Add-on Python code we need access to a Python library that allows encoding and decoding JSON Web Tokens (JWT). Currently we package cffi and PyJWT under lib, with the necessary cffi backend for each OS, i.e. for Linux: _cffi_backend.cpython-37m-x86_64-linux-gnu.so, and for Windows: _cffi_backend.cp37-win_amd64.pyd.

This worked until recently, when we updated the add-on's splunk-sdk-python to 2.0.2 and the add-on started failing in the Splunk Cloud environment with the error: No module named '_cffi_backend'.

What OS and version does Splunk Cloud run? And is there any way to invoke the Python library install command 'pip install pyjwt' while the add-on installs?
Thanks Giuseppe - that worked for the single value! I'm pretty sure I had tried it already, but I was probably trying to over-engineer it.  Cheers
Thanks heaps! I knew it was going to be something simple like that.  Appreciate your help. Cheers
Hey guys, the depends thing on dashboards worked for me only when I did this trick. I'm not sure why.

mvc.Components.get("default").unset("myToken");
mvc.Components.get("submitted").unset("myToken");
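For what it's worth, if the token is being set from SimpleXML itself, the native unset action may be an alternative to the JS trick. A hypothetical sketch, assuming you want the token cleared whenever some input changes (input name is a placeholder):

<input type="dropdown" token="someInput">
  <change>
    <unset token="myToken"></unset>
  </change>
</input>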
Hi, I am trying to render a network of my data using react-viz in the dashboard of my Splunk App. For the past few days, I have been trying various things to get the code to work, but all I see is a blank screen. I have pasted my code below. Please let me know if you can identify where I might be going wrong.

network_dashboard.js:

require([
  'jquery',
  'splunkjs/mvc',
  'splunkjs/mvc/simplexml/ready!'
], function($, mvc) {
  function loadScript(url) {
    return new Promise((resolve, reject) => {
      const script = document.createElement('script');
      script.src = url;
      script.onload = resolve;
      script.onerror = reject;
      document.head.appendChild(script);
    });
  }

  function waitForReact() {
    return new Promise((resolve) => {
      const checkReact = () => {
        if (window.React && window.ReactDOM && window.vis) {
          resolve();
        } else {
          setTimeout(checkReact, 100);
        }
      };
      checkReact();
    });
  }

  Promise.all([
    loadScript('https://unpkg.com/react@17/umd/react.production.min.js'),
    loadScript('https://unpkg.com/react-dom@17/umd/react-dom.production.min.js'),
    loadScript('https://unpkg.com/vis-network/dist/vis-network.min.js')
  ])
    .then(waitForReact)
    .then(() => {
      console.log('React, ReactDOM, and vis-network are loaded and available');
      initApp();
    })
    .catch(error => {
      console.error('Error loading scripts:', error);
    });

  function initApp() {
    const NetworkPage = () => {
      const [nodes, setNodes] = React.useState([]);
      const [edges, setEdges] = React.useState([]);
      const [loading, setLoading] = React.useState(true);
      const [clickedEdge, setClickedEdge] = React.useState(null);
      const [clickedNode, setClickedNode] = React.useState(null);
      const [showTransparent, setShowTransparent] = React.useState(false);

      React.useEffect(() => {
        // Static data for debugging
        const staticNodes = [
          {'id': 1, 'label': 'wininit.exe', 'type': 'process', 'rank': 0},
          {'id': 2, 'label': 'services.exe', 'type': 'process', 'rank': 1},
          {'id': 3, 'label': 'sysmon.exe', 'type': 'process', 'rank': 2},
          {'id': 4, 'label': 'comb-file', 'type': 'file', 'rank': 1, 'nodes': [
            'c:\\windows\\system32\\mmc.exe',
            'c:\\mozillafirefox\\firefox.exe',
            'c:\\windows\\system32\\cmd.exe',
            'c:\\windows\\system32\\dllhost.exe',
            'c:\\windows\\system32\\conhost.exe',
            'c:\\wireshark\\tshark.exe',
            'c:\\confer\\repwmiutils.exe',
            'c:\\windows\\system32\\searchprotocolhost.exe',
            'c:\\windows\\system32\\searchfilterhost.exe',
            'c:\\windows\\system32\\consent.exe',
            'c:\\python27\\python.exe',
            'c:\\windows\\system32\\audiodg.exe',
            'c:\\confer\\repux.exe',
            'c:\\windows\\system32\\taskhost.exe'
          ]},
          {'id': 5, 'label': 'c:\\wireshark\\dumpcap.exe', 'type': 'file', 'rank': 1},
          {'id': 6, 'label': 'c:\\windows\\system32\\audiodg.exe', 'type': 'file', 'rank': 1}
        ];
        const staticEdges = [
          {'source': 1, 'target': 2, 'label': 'procstart', 'alname': null, 'time': '2022-07-19 16:00:17.074477', 'transparent': false},
          {'source': 2, 'target': 3, 'label': 'procstart', 'alname': null, 'time': '2022-07-19 16:00:17.531504', 'transparent': false},
          {'source': 4, 'target': 3, 'label': 'moduleload', 'alname': null, 'time': '2022-07-19 16:01:03.194938', 'transparent': false},
          {'source': 5, 'target': 3, 'label': 'moduleload', 'alname': 'Execution - SysInternals Use', 'time': '2022-07-19 16:01:48.497418', 'transparent': false},
          {'source': 6, 'target': 3, 'label': 'moduleload', 'alname': 'Execution - SysInternals Use', 'time': '2022-07-19 16:05:04.581065', 'transparent': false}
        ];

        // Sort edges chronologically and bucket nodes by rank for layout
        const sortedEdges = staticEdges.sort((a, b) => new Date(a.time) - new Date(b.time));
        const nodesByRank = staticNodes.reduce((acc, node) => {
          const rank = node.rank || 0;
          if (!acc[rank]) acc[rank] = [];
          acc[rank].push(node);
          return acc;
        }, {});

        // Compute fixed x/y positions: columns by rank, rows by edge count
        const nodePositions = {};
        const rankSpacingX = 200;
        const ySpacing = 100;
        Object.keys(nodesByRank).forEach(rank => {
          const nodesInRank = nodesByRank[rank];
          nodesInRank.sort((a, b) => {
            const aEdges = staticEdges.filter(edge => edge.source === a.id || edge.target === a.id);
            const bEdges = staticEdges.filter(edge => edge.source === b.id || edge.target === b.id);
            return aEdges.length - bEdges.length;
          });
          const totalNodesInRank = nodesInRank.length;
          nodesInRank.forEach((node, index) => {
            nodePositions[node.id] = {
              x: rank * rankSpacingX,
              y: index * ySpacing - (totalNodesInRank * ySpacing) / 2,
            };
          });
        });

        const positionedNodes = staticNodes.map(node => ({
          ...node,
          x: nodePositions[node.id].x,
          y: nodePositions[node.id].y,
        }));

        setNodes(positionedNodes);
        setEdges(sortedEdges);
        setLoading(false);
      }, []);

      const handleNodeClick = (event) => {
        const { nodes: clickedNodes } = event;
        if (clickedNodes.length > 0) {
          const nodeId = clickedNodes[0];
          const clickedNode = nodes.find(node => node.id === nodeId);
          setClickedNode(clickedNode || null);
        }
      };

      const handleEdgeClick = (event) => {
        const { edges: clickedEdges } = event;
        if (clickedEdges.length > 0) {
          const edgeId = clickedEdges[0];
          const clickedEdge = edges.find(edge => `${edge.source}-${edge.target}` === edgeId);
          setClickedEdge(clickedEdge || null);
        }
      };

      const handleClosePopup = () => {
        setClickedEdge(null);
        setClickedNode(null);
      };

      const toggleTransparentEdges = () => {
        setShowTransparent(prevState => !prevState);
      };

      if (loading) {
        return React.createElement('div', null, 'Loading...');
      }

      const formatFilePath = (filePath) => {
        const parts = filePath.split('\\');
        if (filePath.length > 12 && parts[0] !== 'comb-file') {
          return `${parts[0]}\\...`;
        }
        return filePath;
      };

      const filteredNodes = showTransparent
        ? nodes
        : nodes.filter(node =>
            edges.some(edge => (edge.source === node.id || edge.target === node.id) && !edge.transparent)
          );
      const filteredEdges = showTransparent ? edges : edges.filter(edge => !edge.transparent);

      const options = {
        layout: { hierarchical: false },
        edges: {
          color: { color: '#000000', highlight: '#ff0000', hover: '#ff0000' },
          arrows: { to: { enabled: true, scaleFactor: 1 } },
          smooth: { type: 'cubicBezier', roundness: 0.2 },
          font: { align: 'top', size: 12 },
        },
        nodes: {
          shape: 'dot',
          size: 20,
          font: { size: 14, face: 'Arial' },
        },
        interaction: {
          dragNodes: true,
          hover: true,
          selectConnectedEdges: false,
        },
        physics: {
          enabled: false,
          stabilization: { enabled: true, iterations: 300, updateInterval: 50 },
        },
      };

      const graphData = {
        nodes: filteredNodes.map(node => {
          let label = node.label;
          if (node.type === 'file' && node.label !== 'comb-file') {
            label = formatFilePath(node.label);
          }
          return {
            id: node.id,
            label: label,
            title: node.type === 'file' ? node.label : '',
            x: node.x,
            y: node.y,
            shape: node.type === 'process' ? 'circle' : node.type === 'socket' ? 'diamond' : 'box',
            size: node.type === 'socket' ? 40 : 20,
            font: { size: node.type === 'socket' ? 10 : 14, vadjust: node.type === 'socket' ? -50 : 0 },
            color: {
              background: node.transparent ? "rgba(151, 194, 252, 0.5)" : "rgb(151, 194, 252)",
              border: "#2B7CE9",
              highlight: {
                background: node.transparent ? "rgba(210, 229, 255, 0.1)" : "#D2E5FF",
                border: "#2B7CE9"
              },
            },
            className: node.transparent && !showTransparent ? 'transparent' : '',
          };
        }),
        edges: filteredEdges.map(edge => ({
          from: edge.source,
          to: edge.target,
          label: edge.label,
          color: edge.alname && edge.transparent ? '#ff9999' : edge.alname ? '#ff0000' : edge.transparent ? '#d3d3d3' : '#000000',
          id: `${edge.source}-${edge.target}`,
          font: { size: 12, align: 'horizontal', background: 'white', strokeWidth: 0 },
          className: edge.transparent && !showTransparent ? 'transparent' : '',
        })),
      };

      // Render the network visualization
      return React.createElement(
        'div',
        { className: 'network-container' },
        React.createElement(
          'button',
          { className: 'toggle-button', onClick: toggleTransparentEdges },
          showTransparent ? "Hide Transparent Edges" : "Show Transparent Edges"
        ),
        React.createElement(
          'div',
          { id: 'network' },
          React.createElement(vis.Network, {
            graph: graphData,
            options: options,
            events: { select: handleNodeClick, doubleClick: handleEdgeClick }
          })
        ),
        clickedNode && React.createElement('div', { className: 'popup' },
          React.createElement('button', { onClick: handleClosePopup }, 'Close'),
          React.createElement('h2', null, `Node: ${clickedNode.label}`),
          React.createElement('p', null, `Type: ${clickedNode.type}`)
        ),
        clickedEdge && React.createElement('div', { className: 'popup' },
          React.createElement('button', { onClick: handleClosePopup }, 'Close'),
          React.createElement('h2', null, `Edge: ${clickedEdge.label}`),
          React.createElement('p', null, `AL Name: ${clickedEdge.alname || 'N/A'}`)
        )
      );
    };

    const rootElement = document.getElementById('root');
    if (rootElement) {
      ReactDOM.render(React.createElement(NetworkPage), rootElement);
    } else {
      console.error('Root element not found');
    }
  }
});

network_dashboard.css:

/* src/components/NetworkPage.css */
.network-container {
  height: 100vh;
  width: 100vw;
  display: flex;
  justify-content: center;
  align-items: center;
  position: relative;
}

#network-visualization {
  height: 100%;
  width: 100%;
}

/* Toggle button styling */
.toggle-button {
  /* position: absolute; */
  top: 10px;
  left: 10px;
  background-color: #007bff;
  color: white;
  border: none;
  border-radius: 20px;
  padding: 8px 16px;
  font-size: 14px;
  cursor: pointer;
  box-shadow: 0 4px 8px rgba(0, 0, 0, 0.1);
}

.toggle-button:hover {
  background-color: #0056b3;
}

/* Popup styling */
.popup {
  background-color: white;
  border: 1px solid #ccc;
  padding: 10px;
  border-radius: 8px;
  box-shadow: 0 4px 8px rgba(0, 0, 0, 0.1);
  font-size: 14px;
  width: 100%;
  height: 100%;
  position: relative;
}

/* Custom Scrollbar Styles */
.scrollable-popup {
  max-height: 150px;
  overflow-y: auto;
  scrollbar-width: thin; /* Firefox */
  scrollbar-color: transparent; /* Firefox */
}

.scrollable-popup::-webkit-scrollbar {
  width: 8px; /* WebKit */
}

.scrollable-popup::-webkit-scrollbar-track {
  background: transparent; /* WebKit */
}

.scrollable-popup::-webkit-scrollbar-thumb {
  background: grey; /* WebKit */
  border-radius: 8px;
}

.scrollable-popup::-webkit-scrollbar-thumb:hover {
  background: darkgrey; /* WebKit */
}

/* Popup edge and node styling */
.popup-edge {
  border: 2px solid #ff0000;
  color: #333;
}

.popup-node {
  border: 2px solid #007bff;
  color: #007bff;
}

.close-button {
  position: absolute;
  top: 5px;
  right: 5px;
  background: transparent;
  border: none;
  font-size: 16px;
  cursor: pointer;
}

.close-button:hover {
  color: red;
}

network_dashboard.xml:

<dashboard script="network_dashboard.js" stylesheet="network_dashboard.css">
  <label>Network Visualization</label>
  <row>
    <panel>
      <html>
        <div id="root" style="height: 800px;"></div>
      </html>
    </panel>
  </row>
</dashboard>
Just encountered the same issue. I'm following along with a Udemy Splunk course. The instructor is using Windows, and it appears that this option is for local Windows Event logs that one would view in Event Viewer (they're not flat text files). I'm guessing that the option appears only on Windows, as Ubuntu and macOS (which I'm using) use flat files for logs rather than Windows events, which I assume are in a database format that Event Viewer parses.
This site implies the remote.s3.endpoint setting is not needed: https://blog.arcusdata.io/how-to-set-up-splunk-smart-store-in-aws

See https://docs.splunk.com/Documentation/Splunk/9.3.0/Indexer/SmartStoresecuritystrategies#Authenticate_with_the_remote_storage_service for the AWS permissions that must be granted to the role.
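For reference, a minimal indexes.conf sketch of a SmartStore volume that relies on the instance's IAM role rather than an explicit remote.s3.endpoint (bucket name and prefix are placeholders; adjust to your environment):

[volume:remote_store]
storageType = remote
# with an IAM role attached to the instance, credentials and endpoint can be discovered automatically
path = s3://your-smartstore-bucket/indexes

[default]
# route every index's warm/cold data to the remote volume
remotePath = volume:remote_store/$_index_name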