All Posts

Hey guys, the depends behaviour on dashboards worked for me only when I did this trick, and I'm not sure why: mvc.Components.get("default").unset("myToken"); mvc.Components.get("submitted").unset("myToken");  
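For anyone hitting the same thing - a minimal sketch of that pattern (token and component names are placeholders): in Simple XML JS extensions, tokens live in both the "default" and "submitted" token models, so a token usually has to be unset in both before panels using depends/rejects react.

require(['splunkjs/mvc', 'splunkjs/mvc/simplexml/ready!'], function (mvc) {
    // Panels gated with depends="$myToken$" watch both token models,
    // so clear the token in each of them.
    var defaultModel = mvc.Components.get('default');
    var submittedModel = mvc.Components.get('submitted');

    function clearToken(name) {
        defaultModel.unset(name);
        if (submittedModel) {          // only present when the form has a submit button
            submittedModel.unset(name);
        }
    }

    clearToken('myToken');
});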
Hi, I am trying to render a network of my data using react-viz in the dashboard of my Splunk App . For the past few days, I have been trying various things to get the code to work, but all I see is a blank screen. I have pasted my code below. Please let me know if you can identify where I might be going wrong.   network_dashboard.js:             require([ 'jquery', 'splunkjs/mvc', 'splunkjs/mvc/simplexml/ready!' ], function($, mvc) { function loadScript(url) { return new Promise((resolve, reject) => { const script = document.createElement('script'); script.src=url; script.onload = resolve; script.onerror = reject; document.head.appendChild(script); }); } function waitForReact() { return new Promise((resolve) => { const checkReact = () => { if (window.React && window.ReactDOM && window.vis) { resolve(); } else { setTimeout(checkReact, 100); } }; checkReact(); }); } Promise.all([ loadScript('https://unpkg.com/react@17/umd/react.production.min.js'), loadScript('https://unpkg.com/react-dom@17/umd/react-dom.production.min.js'), loadScript('https://unpkg.com/vis-network/dist/vis-network.min.js') ]) .then(waitForReact) .then(() => { console.log('React, ReactDOM, and vis-network are loaded and available'); initApp(); }) .catch(error => { console.error('Error loading scripts:', error); }); function initApp() { const NetworkPage = () => { const [nodes, setNodes] = React.useState([]); const [edges, setEdges] = React.useState([]); const [loading, setLoading] = React.useState(true); const [clickedEdge, setClickedEdge] = React.useState(null); const [clickedNode, setClickedNode] = React.useState(null); const [showTransparent, setShowTransparent] = React.useState(false); React.useEffect(() => { // Static data for debugging const staticNodes = [ {'id': 1, 'label': 'wininit.exe', 'type': 'process', 'rank': 0}, {'id': 2, 'label': 'services.exe', 'type': 'process', 'rank': 1}, {'id': 3, 'label': 'sysmon.exe', 'type': 'process', 'rank': 2}, {'id': 4, 'label': 'comb-file', 'type': 'file', 'rank': 1, 'nodes': [ 'c:\\windows\\system32\\mmc.exe', 'c:\\mozillafirefox\\firefox.exe', 'c:\\windows\\system32\\cmd.exe', 'c:\\windows\\system32\\dllhost.exe', 'c:\\windows\\system32\\conhost.exe', 'c:\\wireshark\\tshark.exe', 'c:\\confer\\repwmiutils.exe', 'c:\\windows\\system32\\searchprotocolhost.exe', 'c:\\windows\\system32\\searchfilterhost.exe', 'c:\\windows\\system32\\consent.exe', 'c:\\python27\\python.exe', 'c:\\windows\\system32\\audiodg.exe', 'c:\\confer\\repux.exe', 'c:\\windows\\system32\\taskhost.exe' ]}, {'id': 5, 'label': 'c:\\wireshark\\dumpcap.exe', 'type': 'file', 'rank': 1}, {'id': 6, 'label': 'c:\\windows\\system32\\audiodg.exe', 'type': 'file', 'rank': 1} ]; const staticEdges = [ {'source': 1, 'target': 2, 'label': 'procstart', 'alname': null, 'time': '2022-07-19 16:00:17.074477', 'transparent': false}, {'source': 2, 'target': 3, 'label': 'procstart', 'alname': null, 'time': '2022-07-19 16:00:17.531504', 'transparent': false}, {'source': 4, 'target': 3, 'label': 'moduleload', 'alname': null, 'time': '2022-07-19 16:01:03.194938', 'transparent': false}, {'source': 5, 'target': 3, 'label': 'moduleload', 'alname': 'Execution - SysInternals Use', 'time': '2022-07-19 16:01:48.497418', 'transparent': false}, {'source': 6, 'target': 3, 'label': 'moduleload', 'alname': 'Execution - SysInternals Use', 'time': '2022-07-19 16:05:04.581065', 'transparent': false} ]; const sortedEdges = staticEdges.sort((a, b) => new Date(a.time) - new Date(b.time)); const nodesByRank = staticNodes.reduce((acc, node) => { const rank 
= node.rank || 0; if (!acc[rank]) acc[rank] = []; acc[rank].push(node); return acc; }, {}); const nodePositions = {}; const rankSpacingX = 200; const ySpacing = 100; Object.keys(nodesByRank).forEach(rank => { const nodesInRank = nodesByRank[rank]; nodesInRank.sort((a, b) => { const aEdges = staticEdges.filter(edge => edge.source === a.id || edge.target === a.id); const bEdges = staticEdges.filter(edge => edge.source === b.id || edge.target === b.id); return aEdges.length - bEdges.length; }); const totalNodesInRank = nodesInRank.length; nodesInRank.forEach((node, index) => { nodePositions[node.id] = { x: rank * rankSpacingX, y: index * ySpacing - (totalNodesInRank * ySpacing) / 2, }; }); }); const positionedNodes = staticNodes.map(node => ({ ...node, x: nodePositions[node.id].x, y: nodePositions[node.id].y, })); setNodes(positionedNodes); setEdges(sortedEdges); setLoading(false); }, []); const handleNodeClick = (event) => { const { nodes: clickedNodes } = event; if (clickedNodes.length > 0) { const nodeId = clickedNodes[0]; const clickedNode = nodes.find(node => node.id === nodeId); setClickedNode(clickedNode || null); } }; const handleEdgeClick = (event) => { const { edges: clickedEdges } = event; if (clickedEdges.length > 0) { const edgeId = clickedEdges[0]; const clickedEdge = edges.find(edge => `${edge.source}-${edge.target}` === edgeId); setClickedEdge(clickedEdge || null); } }; const handleClosePopup = () => { setClickedEdge(null); setClickedNode(null); }; const toggleTransparentEdges = () => { setShowTransparent(prevState => !prevState); }; if (loading) { return React.createElement('div', null, 'Loading...'); } const formatFilePath = (filePath) => { const parts = filePath.split('\\'); if (filePath.length > 12 && parts[0] !== 'comb-file') { return `${parts[0]}\\...`; } return filePath; }; const filteredNodes = showTransparent ? nodes : nodes.filter(node => edges.some(edge => (edge.source === node.id || edge.target === node.id) && !edge.transparent) ); const filteredEdges = showTransparent ? edges : edges.filter(edge => !edge.transparent); const options = { layout: { hierarchical: false }, edges: { color: { color: '#000000', highlight: '#ff0000', hover: '#ff0000' }, arrows: { to: { enabled: true, scaleFactor: 1 } }, smooth: { type: 'cubicBezier', roundness: 0.2 }, font: { align: 'top', size: 12 }, }, nodes: { shape: 'dot', size: 20, font: { size: 14, face: 'Arial' }, }, interaction: { dragNodes: true, hover: true, selectConnectedEdges: false, }, physics: { enabled: false, stabilization: { enabled: true, iterations: 300, updateInterval: 50 }, }, }; const graphData = { nodes: filteredNodes.map(node => { let label = node.label; if (node.type === 'file' && node.label !== 'comb-file') { label = formatFilePath(node.label); } return { id: node.id, label: label, title: node.type === 'file' ? node.label : '', x: node.x, y: node.y, shape: node.type === 'process' ? 'circle' : node.type === 'socket' ? 'diamond' : 'box', size: node.type === 'socket' ? 40 : 20, font: { size: node.type === 'socket' ? 10 : 14, vadjust: node.type === 'socket' ? -50 : 0 }, color: { background: node.transparent ? "rgba(151, 194, 252, 0.5)" : "rgb(151, 194, 252)", border: "#2B7CE9", highlight: { background: node.transparent ? "rgba(210, 229, 255, 0.1)" : "#D2E5FF", border: "#2B7CE9" }, }, className: node.transparent && !showTransparent ? 'transparent' : '', }; }), edges: filteredEdges.map(edge => ({ from: edge.source, to: edge.target, label: edge.label, color: edge.alname && edge.transparent ? '#ff9999' : edge.alname ? 
'#ff0000' : edge.transparent ? '#d3d3d3' : '#000000', id: `${edge.source}-${edge.target}`, font: { size: 12, align: 'horizontal', background: 'white', strokeWidth: 0 }, className: edge.transparent && !showTransparent ? 'transparent' : '', })), }; // Render the network visualization return React.createElement( 'div', { className: 'network-container' }, React.createElement( 'button', { className: 'toggle-button', onClick: toggleTransparentEdges }, showTransparent ? "Hide Transparent Edges" : "Show Transparent Edges" ), React.createElement( 'div', { id: 'network' }, React.createElement(vis.Network, { graph: graphData, options: options, events: { select: handleNodeClick, doubleClick: handleEdgeClick } }) ), clickedNode && React.createElement('div', { className: 'popup' }, React.createElement('button', { onClick: handleClosePopup }, 'Close'), React.createElement('h2', null, `Node: ${clickedNode.label}`), React.createElement('p', null, `Type: ${clickedNode.type}`) ), clickedEdge && React.createElement('div', { className: 'popup' }, React.createElement('button', { onClick: handleClosePopup }, 'Close'), React.createElement('h2', null, `Edge: ${clickedEdge.label}`), React.createElement('p', null, `AL Name: ${clickedEdge.alname || 'N/A'}`) ) ); }; const rootElement = document.getElementById('root'); if (rootElement) { ReactDOM.render(React.createElement(NetworkPage), rootElement); } else { console.error('Root element not found'); } } });             network_dashboard.css:             /* src/components/NetworkPage.css */ .network-container { height: 100vh; width: 100vw; display: flex; justify-content: center; align-items: center; position: relative; } #network-visualization { height: 100%; width: 100%; } /* Toggle button styling */ .toggle-button { /* position: absolute;*/ top: 10px; left: 10px; background-color: #007bff; color: white; border: none; border-radius: 20px; padding: 8px 16px; font-size: 14px; cursor: pointer; box-shadow: 0 4px 8px rgba(0, 0, 0, 0.1); } .toggle-button:hover { background-color: #0056b3; } /* Popup styling */ .popup { background-color: white; border: 1px solid #ccc; padding: 10px; border-radius: 8px; box-shadow: 0 4px 8px rgba(0, 0, 0, 0.1); font-size: 14px; width: 100%; height: 100%; position: relative; } /* Custom Scrollbar Styles */ .scrollable-popup { max-height: 150px; overflow-y: auto; scrollbar-width: thin; /* Firefox */ scrollbar-color: transparent; /* Firefox */ } .scrollable-popup::-webkit-scrollbar { width: 8px; /* WebKit */ } .scrollable-popup::-webkit-scrollbar-track { background: transparent; /* WebKit */ } .scrollable-popup::-webkit-scrollbar-thumb { background: grey; /* WebKit */ border-radius: 8px; } .scrollable-popup::-webkit-scrollbar-thumb:hover { background: darkgrey; /* WebKit */ } /* Popup edge and node styling */ .popup-edge { border: 2px solid #ff0000; color: #333; } .popup-node { border: 2px solid #007bff; color: #007bff; } .close-button { position: absolute; top: 5px; right: 5px; background: transparent; border: none; font-size: 16px; cursor: pointer; } .close-button:hover { color: red; }               network_dashboard.xml             <dashboard script="network_dashboard.js" stylesheet="network_dashboard.css"> <label>Network Visualization</label> <row> <panel> <html> <div id="root" style="height: 800px;"></div> </html> </panel> </row> </dashboard>              
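For anyone comparing notes on the above: as far as I can tell, vis-network does not ship React components, so passing vis.Network to React.createElement renders nothing, which could explain a blank screen. The library is normally instantiated imperatively against a DOM node, e.g. inside a useEffect with a ref. A minimal sketch under that assumption (component name, data shape and height are placeholders, not the poster's actual fix):

const NetworkGraph = ({ nodes, edges, options }) => {
    const containerRef = React.useRef(null);

    React.useEffect(() => {
        if (!containerRef.current) return;
        // vis-network expects DataSet instances and from/to fields on edges.
        const data = {
            nodes: new vis.DataSet(nodes),
            edges: new vis.DataSet(edges.map(e => ({ ...e, from: e.source, to: e.target })))
        };
        const network = new vis.Network(containerRef.current, data, options);
        network.on('selectNode', params => console.log('node clicked', params.nodes));
        network.on('selectEdge', params => console.log('edge clicked', params.edges));
        return () => network.destroy();   // clean up when the component unmounts
    }, [nodes, edges, options]);

    return React.createElement('div', { ref: containerRef, style: { height: '800px' } });
};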
Just encountered the same issue. I'm following along on a Udemy Splunk course. The instructor is using Windows, and it appears that this option is for local Windows Event Logs that one would view in Event Viewer (they're not flat text files). I'm guessing the option appears only on Windows, as Ubuntu and macOS (which I'm using) use flat files for logs rather than Windows events, which I assume are stored in a database format that Event Viewer parses.  
This site implies the remote.s3.endpoint setting is not needed: https://blog.arcusdata.io/how-to-set-up-splunk-smart-store-in-aws. See https://docs.splunk.com/Documentation/Splunk/9.3.0/Indexer/SmartStoresecuritystrategies#Authenticate_with_the_remote_storage_service for the AWS permissions that must be granted to the role.
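For reference, a minimal SmartStore indexes.conf sketch along those lines (bucket, region and index names are placeholders; whether remote.s3.endpoint can be omitted depends on how the instance profile / IAM role is set up):

# indexes.conf on the indexers / cluster peers
[volume:remote_store]
storageType = remote
path = s3://my-smartstore-bucket/indexes
# remote.s3.endpoint = https://s3.us-east-1.amazonaws.com  (often not needed with an instance profile)

[my_index]
homePath   = $SPLUNK_DB/my_index/db
coldPath   = $SPLUNK_DB/my_index/colddb
thawedPath = $SPLUNK_DB/my_index/thaweddb
remotePath = volume:remote_store/$_index_name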
Change the replication factor and search factor (RF/SF) to 1 and the cluster manager (CM) will not complain about missing nodes.
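For reference, that lives in server.conf on the cluster manager; a sketch of the single-copy case being described (older versions use mode = master):

# server.conf on the cluster manager
[clustering]
mode = manager
replication_factor = 1
search_factor = 1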
Yes, removing an app from the server class will cause the client to uninstall that app.
If you simply want to find out when a host started sending data to an index, you simply need to find min(_time):

| tstats min(_time) where index=something earliest=1 by host

Two caveats:
1) It's based on the _time field, so if you've ingested a backlog of 3 years' worth of data right after deploying your forwarder, your results will probably not be true. I don't remember if you can use _indextime in tstats; you'd have to check.
2) It will of course only show data from buckets which haven't yet rolled to frozen, so for old data it will not be accurate.
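A slightly expanded sketch of that search with a readable timestamp (index name is a placeholder):

| tstats min(_time) as first_seen where index=something earliest=1 by host
| eval first_seen=strftime(first_seen, "%Y-%m-%d %H:%M:%S")
| sort first_seen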
There are more or less three ways of going about it.
1. Freezing the data to external storage instead of removing it - the downside is that you have to thaw the data if you ever want to use it again.
2. Simply stop your server and copy out the indexed data from the buckets - it uses much more space, but you can copy those buckets back into the index directory and you're ready to go (unless you forget about retention periods and your data immediately rolls to frozen ;-)).
3. Bend over backwards and run a bunch of searches exporting your data to some csv or json. The upside is that you can use such an export with other tools (probably after some processing), but the downside is that you won't be able to use it again with Splunk without additional magic and reingesting it into an index.
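For option 1, the archiving part is usually just an indexes.conf setting per index (path is a placeholder); when a bucket freezes, Splunk copies its raw data there instead of deleting it:

# indexes.conf
[my_index]
coldToFrozenDir = /mnt/archive/frozen/my_index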
Sorry for the punctuation problem. What I need is just to create a search that says, for every host, how many days it has been talking to index X, using a command that does a span=1d. The heartbeat is to monitor the hosts and make sure they are going to the right place; this is often done by sending or tracking periodic signals from the host to Splunk. Thanks for your help and efforts.
Adding to @dural_yyz 's answer - your question seems not to be Splunk-related but rather connected with your source system, which might or might not be able to produce the required logs. If you're not getting logs into Splunk, assuming that the intermediate sc4s is working in general because it sends other logs, there are two possibilities - either your sc4s is misconfigured and doesn't send the data properly (but to troubleshoot that you'd need to be absolutely sure that sc4s is getting the relevant events from the source; did you verify it?) or your source is not sending the desired data (and this is something you need to resolve on the source side).
Hi, I downloaded the mac intel version 4.2.1 of the app to use numpy and pandas. I copied over exec_anaconda.py as per the README and also util.py (exec_anaconda.py uses it), and added a test script with the preamble mentioned in the README:

#!/usr/bin/python
import exec_anaconda
exec_anaconda.exec_anaconda()

import pandas as pd
import sys
print(sys.path)

This runs but triggers mac security alerts for a whole bunch of files (easily more than 25, and some need multiple clicks). I have "Allow applications downloaded from App store and identified developers" in my security settings. Given that this package is from Splunk, can Splunk codesign it (or whatever else is needed) so it is marked as from an identified developer? Or is there a setting I can use to turn off the warnings for everything from a single tar.gz or everything under a folder, etc.? I'm on mac Sonoma 14.6 running Splunk 9.2.2. Thanks
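Not a real fix, but one workaround that sometimes quiets Gatekeeper prompts on downloaded archives is clearing the quarantine attribute on the extracted app directory (the path is a placeholder, and whether this is acceptable is a policy call for your environment):

xattr -r -d com.apple.quarantine $SPLUNK_HOME/etc/apps/<scientific_python_app_dir>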
I have no idea why it printed this. And it's surprising that adding

| spath json.msg output=msg | spath input=msg query{}

in this context yields different results than if I place it after my query in our system, and that the query for some reason also extracts the query[] part of the json (in your context, not in mine). Why? Who asked for that? But even then I still cannot access the 'parsed' field named 'batch'... I think the query is some generic function doing some guess-what-is-important extractions. I've got an idea, which will force us to avoid clever functions doing random data extractions. Can you please show me how I could transform this little json sample into the string: select * from whatever.whatever w where w.whatever in (1,2,3) The equivalent in jq would look like:

jq -r '.json.msg|fromjson|.query[0] as $q| .params[]|reduce .[] as $param ($q; sub("\\?";$param))'

The full bash command, including input, being:

echo '{"time":"2024-09-19T08:03:02.234663252Z","json":{"ts":"2024-09-19T15:03:02.234462341+07:00","logger":"<anonymized>","level":"WARN","class":"net.ttddyy.dsproxy.support.SLF4JLogUtils","method":"writeLog","file":"<anonymized>","line":26,"thread":"pool-1-thread-1","arguments":{},"msg":"{\"name\":\"\", \"connection\":22234743, \"time\":20000, \"success\":false, \"type\":\"Prepared\", \"batch\":false, \"querySize\":1, \"batchSize\":0, \"query\":[\"select * from whatever.whatever w where w.whatever in (?,?,?) \"], \"params\":[[\"1\",\"2\",\"3\"]]}","scope":"APP"},"kubernetes":{"pod_name":"<anonymized>","namespace_name":"<anonymized>","labels":{"whatever":"whatever"},"container_image":"<anonymized>"}}' | jq -r '.json.msg|fromjson|.query[0] as $q| .params[]|reduce .[] as $param ($q; sub("\\?";$param))';
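Not an answer to the jq part, but a minimal spath sketch for reaching the nested fields, assuming the raw event looks like the sample above:

| spath path=json.msg output=msg
| spath input=msg path=batch output=batch
| spath input=msg path=query{} output=query
| eval query=mvindex(query, 0)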
You could attach your props to some wildcarded host or source stanza, but that's something I'd be very careful about. It's a very non-obvious configuration and can make issues a huge pain to debug.
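For reference, a sketch of what such stanzas look like (host pattern, source path and sed expression are all placeholders):

# props.conf
[host::appserver-*]
SEDCMD-mask_ids = s/\d{12}(\d{4})/XXXXXXXXXXXX\1/g

[source::/var/log/myapp/*.log]
SEDCMD-mask_ids = s/\d{12}(\d{4})/XXXXXXXXXXXX\1/g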
Ok. Firstly, invest in some punctuation, please, because this stream of consciousness is difficult to read. Secondly, what are you spinning up? You mention server classes, so I suspect you're talking about creating some (virtual? doesn't matter really) machines with a pre-installed UF. And now what? That UF contains some pre-defined settings, especially including outputs.conf? If it does, then what do you want to "heartbeat"? It's gonna be sending its own internal logs anyway. It is also a fairly typical practice to distribute with your UF a kind of "starter pack" of standard apps containing common configuration items (like the DS address, outputs.conf and such) and generally accept all hosts into a serverclass distributing current versions of those apps. So what heartbeat do you want?
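For reference, such a "starter pack" is usually just a couple of tiny apps baked into the UF install image (app names, hosts and ports below are placeholders):

# etc/apps/org_all_deploymentclient/default/deploymentclient.conf
[target-broker:deploymentServer]
targetUri = ds.example.com:8089

# etc/apps/org_all_forwarder_outputs/default/outputs.conf
[tcpout]
defaultGroup = primary_indexers

[tcpout:primary_indexers]
server = idx1.example.com:9997, idx2.example.com:9997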
I don't want to remove the overlay. I only want to remove the number 10.
This is confusing because your search specifically sets the values you want to remove. The simple solution is to remove the last pipe and eval of the additional field. Assuming you need that for some alternate reason, then I would recommend the following (a rough sketch of the data source definitions is below):
1) Create a base search "ds_base" that doesn't include the pipe and eval of the overlay
2) Create the viz and map its data source to the base search
3) Create a chain search which has the pipe and eval of the overlay field and map it to the base
4) Map the alternate need to the chain search as its source
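In Dashboard Studio source JSON that roughly looks like the sketch below (data source IDs, the base query and the overlay eval are placeholders); only the visualization that needs the overlay points at the chain search:

"dataSources": {
    "ds_base": {
        "type": "ds.search",
        "options": { "query": "index=myindex | timechart count" }
    },
    "ds_with_overlay": {
        "type": "ds.chain",
        "options": { "extend": "ds_base", "query": "| eval overlay=10" }
    }
}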
Currently the only event is an onClick type trigger, as far as I can see in the documentation: https://docs.splunk.com/Documentation/Splunk/9.3.1/DashStudio/WhatNew Since it appears what you want to do is trigger a search and then wait, you might want to look into the recently added Submit button options, which should allow you to trigger a data source search on demand.
Yes... Is there a way to implement masking globally? If not, I assume we need to add each sourcetype in props.
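For the per-sourcetype route, masking is typically a SEDCMD in props.conf wherever parsing happens (indexers or heavy forwarders); the sourcetype name and pattern below are placeholders, and each sourcetype you want masked needs its own stanza:

# props.conf
[my_sourcetype]
SEDCMD-mask_ssn = s/\d{3}-\d{2}-(\d{4})/XXX-XX-\1/g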
Hi Splunkers, I have a question and I need help from experts. I'm working on creating a heartbeat tracker search that monitors when a host gets spun up. Whether it's Windows or Linux, it gets generic apps from the server class: there is a server class built out there that is just looking for any host that isn't already in a server class. The purpose of the heartbeat tracker is to inform us that there is a brand-new host that isn't in a server class yet. So the ask is to track the hosts showing up in the heartbeat index; if those hosts are there for multiple days, that means they need to be addressed. As an example, every host that gets spun up, whether we know about it or not, is going to get the heartbeat initially; it spins up, gets the heartbeat, and once it gets its real apps it stops sending logs to the heartbeat index. So what I really want to know is, per host, how many days it has been talking to index X. If a host has been talking to index X for several days, then I know that isn't the initial startup; it's a problem that needs to be looked at.

| tstats count where index=X by host, index, _time span=1d
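For what it's worth, a sketch of the kind of search being described (index name and day threshold are placeholders): count the distinct days each host appears in the heartbeat index and keep only the hosts that have been reporting there longer than the expected initial window.

| tstats count where index=X by host, _time span=1d
| stats count as days_seen min(_time) as first_seen max(_time) as last_seen by host
| where days_seen > 3
| eval first_seen=strftime(first_seen, "%Y-%m-%d"), last_seen=strftime(last_seen, "%Y-%m-%d")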