SPL can present a steeper learning curve compared with non-streaming languages.  But once you get some basics, it is very rewarding because it gives you so much freedom.  That said, SPL's JSON path notation takes some getting used to.  The JSON functions are actually OK once you understand the notation.  Before I give my suggestions, let's examine your original attempt:

    | spath input=json.msg output=msg_raw path=json.msg

This will not give you the desired output because the embedded JSON object in json.msg does not contain a path named json.msg.  The object that does contain this path is _raw.  If you try

    | spath ``` input=_raw implied ``` output=msg_raw path=json.msg

you would have extracted a field named msg_raw that simply duplicates the value of json.msg, i.e. both fields contain

    {"name":"", "connection":22234743, "time":20000, "success":false, "type":"Prepared", "batch":false, "querySize":1, "batchSize":0, "query":["select * from whatever.whatever w where w.whatever in (?,?,?) "], "params":[["1","2","3"]]}

Of course, this is not what you wanted.  What did we learn here?  The path option in spath navigates inside the JSON object you give as input.

But if you try

    | spath input=json.msg

you will get these fields from json.msg:

    batch = false
    batchSize = 0
    connection = 22234743
    name = (empty)
    params{}{} = 1, 2, 3 (multivalue)
    querySize = 1
    query{} = select * from whatever.whatever w where w.whatever in (?,?,?)
    success = false
    time = 20000
    type = Prepared

What did we learn here?  Place the field whose value is itself a valid JSON object directly in spath's input option to extract from that field.  Additionally, Splunk uses {} to denote fields extracted from a JSON array and turns them into a multivalue field.
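The nesting trap can be illustrated outside Splunk with a small Python sketch (a trimmed-down stand-in for the event, not the full data): _raw contains the path json.msg, but the value at that path is itself a JSON-encoded string that needs a second parse, which is roughly what `spath input=json.msg` does for you.

```python
import json

# _raw: the outer event; the value of json.msg is an embedded JSON *string*
_raw = '{"json": {"msg": "{\\"connection\\":22234743, \\"querySize\\":1}"}}'

outer = json.loads(_raw)
msg_raw = outer["json"]["msg"]   # what path=json.msg gives you: still a string
print(type(msg_raw).__name__)    # str

inner = json.loads(msg_raw)      # what spath input=json.msg effectively does
print(inner["connection"])       # 22234743
```

The first parse only gets you the string; only the second parse exposes connection, querySize, etc. as fields.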
In your other comment, you said you want the equivalent of `jq '.json.msg|fromjson|.query[0]'`.  That would be trivial from the above result.  Add

    | eval jq_equivalent = mvindex('params{}{}', 0)
    | fields params* jq_equivalent

and you get

    params{}{} = 1, 2, 3
    jq_equivalent = 1

What did we learn here?  1. mvindex selects a value from a multivalue field (params{}{}) using a zero-based index;  2. Use single quotes to dereference the value of a field whose name contains special characters.

A word of caution: If all you want from params{}{} is a single multivalue field, the above can be sufficient.  But params[[]] is an array of arrays.  To complicate things, your developer doesn't do you the best service by throwing the query[] array into the same flat structure.  As the JSON array query can have more than one element, my speculation is that your developer intended each element of the top-level params array to hold the params for the corresponding element of query[].

What if, instead of

    {\"name\":\"\", \"connection\":22234743, \"time\":20000, \"success\":false, \"type\":\"Prepared\", \"batch\":false, \"querySize\":1, \"batchSize\":0, \"query\":[\"select * from whatever.whatever w where w.whatever in (?,?,?) \"], \"params\":[[\"1\",\"2\",\"3\"]]}

your raw data contained a json.msg with this value?

    "{\"name\":\"\", \"connection\":22234743, \"time\":20000, \"success\":false, \"type\":\"Prepared\", \"batch\":false, \"querySize\":2, \"batchSize\":0, \"query\":[\"select * from whatever.whatever w where w.whatever in (?,?,?) \", \"select * from whatever.whatever2 w where w.whatever2 in (?,?) \"], \"params\":[[\"1\",\"2\",\"3\"],[\"4\",\"5\"]]}"

i.e., query[] and params[] each contain two elements?  (For convenience, I assume that querySize represents the number of elements in these arrays.  We can live without this external count, but why complicate our lives in a tutorial.)
Using the above search, you will find query{} and params{}{} contain

    querySize = 2
    query{} = select * from whatever.whatever w where w.whatever in (?,?,?)
              select * from whatever.whatever2 w where w.whatever2 in (?,?)
    params{}{} = 1, 2, 3, 4, 5

This is one of the shortcomings of flattening structured data like JSON; it is not unique to SPL, but here the shortcoming becomes more obvious.  On top of the flattened structure, the spath command also cannot handle arrays of arrays correctly.  Now what?

Here is what I would use to get past this barrier.  (This is not the only way, but the JSON functions introduced in 8.2 work really well while preserving semantic context.)

    | spath input=json.msg
    | eval params_array = json_array_to_mv(json_extract('json.msg', "params"))
    | eval idx = mvrange(0, querySize) ``` assuming querySize is size of query{} ```
    | eval query_params = mvmap(idx, json_object("query", mvindex('query{}', idx), "params", mvindex(params_array, idx)))
    | fields - json.msg params* query{} idx
    | mvexpand query_params

With this, the output contains two events, one per query, each carrying batch=false, batchSize=0, connection=22234743, querySize=2, success=false, time=20000, type=Prepared, plus

    query_params = {"query":"select * from whatever.whatever w where w.whatever in (?,?,?) ","params":"[\"1\",\"2\",\"3\"]"}
    query_params = {"query":"select * from whatever.whatever2 w where w.whatever2 in (?,?) ","params":"[\"4\",\"5\"]"}

I think you know what I am going for by now.  What did we learn here?  To compensate for the unfortunate implied semantics your developer forces on you, first construct an intermediary JSON object that binds each query to its array of params.  Then use mvexpand to separate the elements.  (Admittedly, json_array_to_mv is an oddball function at first glance.  But once you understand how Splunk uses multivalue fields, you'll get used to the concept.  Hopefully you will find many merits in the multivalue representation.)
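The pairing logic above (bind query[i] to params[i], then expand one row per pair) can be sketched in plain Python, under the same assumption that querySize matches the array lengths:

```python
import json

# trimmed-down stand-in for the embedded msg object
msg = {
    "querySize": 2,
    "query": ["select * from whatever.whatever w where w.whatever in (?,?,?) ",
              "select * from whatever.whatever2 w where w.whatever2 in (?,?) "],
    "params": [["1", "2", "3"], ["4", "5"]],
}

# mirror of: mvmap(idx, json_object("query", mvindex('query{}', idx),
#                                   "params", mvindex(params_array, idx)))
# each params sub-array stays a JSON string, hence the inner json.dumps
query_params = [
    json.dumps({"query": q, "params": json.dumps(p)})
    for q, p in zip(msg["query"], msg["params"])
]

# mirror of mvexpand: one row per query/params pair
for row in query_params:
    print(row)
```

Each printed row corresponds to one event after mvexpand, with the query bound to exactly its own params array.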
From here, you can use spath again to get the desired results, but I find the JSON functions to be simpler AND more semantic considering there are only two keys in this intermediary JSON.  Add the following to the above:

    | eval query = json_extract(query_params, "query")
    | eval params = json_array_to_mv(json_extract(query_params, "params"))

With this, you get the final result: two events, one per query, each with batch=false, batchSize=0, connection=22234743, querySize=2, success=false, time=20000, type=Prepared, plus

    params = 1, 2, 3   query = select * from whatever.whatever w where w.whatever in (?,?,?)
    params = 4, 5      query = select * from whatever.whatever2 w where w.whatever2 in (?,?)

Hope this is a useful format for your further processing.

Below is an emulation of the above two-query mock data that I adapted from @ITWhisperer's original emulation.  Play with it and compare with real data.

    | makeresults
    | eval _raw="{ \"time\": \"2024-09-19T08:03:02.234663252Z\", \"json\": { \"ts\": \"2024-09-19T15:03:02.234462341+07:00\", \"logger\": \"<anonymized>\", \"level\": \"WARN\", \"class\": \"net.ttddyy.dsproxy.support.SLF4JLogUtils\", \"method\": \"writeLog\", \"file\": \"<anonymized>\", \"line\": 26, \"thread\": \"pool-1-thread-1\", \"arguments\": {}, \"msg\": \"{\\\"name\\\":\\\"\\\", \\\"connection\\\":22234743, \\\"time\\\":20000, \\\"success\\\":false, \\\"type\\\":\\\"Prepared\\\", \\\"batch\\\":false, \\\"querySize\\\":2, \\\"batchSize\\\":0, \\\"query\\\":[\\\"select * from whatever.whatever w where w.whatever in (?,?,?) \\\", \\\"select * from whatever.whatever2 w where w.whatever2 in (?,?) \\\"], \\\"params\\\":[[\\\"1\\\",\\\"2\\\",\\\"3\\\"],[\\\"4\\\",\\\"5\\\"]]}\", \"scope\": \"APP\" }, \"kubernetes\": { \"pod_name\": \"<anonymized>\", \"namespace_name\": \"<anonymized>\", \"labels\": { \"whatever\": \"whatever\" }, \"container_image\": \"<anonymized>\" } }"
    | spath ``` data emulation ```

Hope this helps.
Hi,

Join is not returning the data with the subsearch; I tried many options from other answers but nothing is working out.  The goal is to check how many departments are using the latest version of some software compared with all older versions together.

My search query:

    index=abc version!="2.0"
    | dedup version thumb_print
    | stats count(thumb_print) as OLD_RUNS by department
    | join department
        [search index=abc version="2.0"
        | dedup version thumb_print
        | stats count(thumb_print) as NEW_RUNS by department ]
    | eval total=OLD_RUNS + NEW_RUNS
    | fillnull value=0
    | eval perc=((NEW_RUNS/total)*100)
    | eval department=substr(department, 1, 50)
    | eval perc=round(perc, 2)
    | table department OLD_RUNS NEW_RUNS perc
    | sort -perc

Overall this search over a 1-week time period is expected to return more than 100k events.
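One thing worth knowing about join semantics here: a default join keeps only departments that appear in both result sets, so a department with zero new-version runs is dropped before fillnull ever runs. A plain-Python sketch with made-up counts (hypothetical data, not your events) shows the difference between join-style merging and a single pass that keeps every department:

```python
old_runs = {"sales": 10, "hr": 5}   # departments seen with version != 2.0
new_runs = {"sales": 4}             # departments seen with version == 2.0

# inner-join behaviour: "hr" is dropped entirely, no chance to fillnull
joined = {d: (old_runs[d], new_runs[d]) for d in old_runs if d in new_runs}
print(joined)                       # {'sales': (10, 4)}

# single-pass alternative: keep every department, default missing side to 0
all_depts = set(old_runs) | set(new_runs)
merged = {d: (old_runs.get(d, 0), new_runs.get(d, 0)) for d in all_depts}
print(sorted(merged.items()))       # [('hr', (5, 0)), ('sales', (10, 4))]
```

The same idea in SPL is usually expressed by searching both versions at once and counting conditionally per department, which avoids the subsearch limits as well.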
Is there any step-by-step checklist I can follow to check or troubleshoot this?  I'm just curious why the logs stopped ingesting into Splunk, because previously I didn't have any issue using this method.
I did not write the logs to a file because of a lack of resources.
Hi @gcusello  Thank you for your answer. I did not install any add-ons for Fortinet.  Sure, I actually have 1 SH and 2 indexers, but I only ingest these logs to 1 indexer. The other logs from other services are ingested correctly and can be searched from the SH.
In our App/Add-on Python code we need access to a Python library that can encode and decode JSON Web Tokens (JWT).  Currently we package cffi and PyJWT under lib with the necessary cffi backend for each OS, i.e. for Linux: _cffi_backend.cpython-37m-x86_64-linux-gnu.so, and for Windows: _cffi_backend.cp37-win_amd64.pyd.

This worked until recently, when we updated the add-on's splunk-sdk-python to 2.0.2 and the add-on started failing in the Splunk Cloud environment.

Error: No module named '_cffi_backend'.

What OS and version is Splunk Cloud running?  And is there any way to invoke a Python library install command like 'pip install pyjwt' during add-on installation?
Thanks Giuseppe - that worked for the single value! I'm pretty sure I had tried it already, but I was probably trying to over-engineer it.  Cheers
Thanks heaps! I knew it was going to be something simple like that.  Appreciate your help. Cheers
Hey guys, the depends attribute on dashboards worked for me only when I did this trick. I'm not sure why.

    mvc.Components.get("default").unset("myToken");
    mvc.Components.get("submitted").unset("myToken");
Hi, I am trying to render a network of my data using react-viz in the dashboard of my Splunk App . For the past few days, I have been trying various things to get the code to work, but all I see is a blank screen. I have pasted my code below. Please let me know if you can identify where I might be going wrong.   network_dashboard.js:             require([ 'jquery', 'splunkjs/mvc', 'splunkjs/mvc/simplexml/ready!' ], function($, mvc) { function loadScript(url) { return new Promise((resolve, reject) => { const script = document.createElement('script'); script.src=url; script.onload = resolve; script.onerror = reject; document.head.appendChild(script); }); } function waitForReact() { return new Promise((resolve) => { const checkReact = () => { if (window.React && window.ReactDOM && window.vis) { resolve(); } else { setTimeout(checkReact, 100); } }; checkReact(); }); } Promise.all([ loadScript('https://unpkg.com/react@17/umd/react.production.min.js'), loadScript('https://unpkg.com/react-dom@17/umd/react-dom.production.min.js'), loadScript('https://unpkg.com/vis-network/dist/vis-network.min.js') ]) .then(waitForReact) .then(() => { console.log('React, ReactDOM, and vis-network are loaded and available'); initApp(); }) .catch(error => { console.error('Error loading scripts:', error); }); function initApp() { const NetworkPage = () => { const [nodes, setNodes] = React.useState([]); const [edges, setEdges] = React.useState([]); const [loading, setLoading] = React.useState(true); const [clickedEdge, setClickedEdge] = React.useState(null); const [clickedNode, setClickedNode] = React.useState(null); const [showTransparent, setShowTransparent] = React.useState(false); React.useEffect(() => { // Static data for debugging const staticNodes = [ {'id': 1, 'label': 'wininit.exe', 'type': 'process', 'rank': 0}, {'id': 2, 'label': 'services.exe', 'type': 'process', 'rank': 1}, {'id': 3, 'label': 'sysmon.exe', 'type': 'process', 'rank': 2}, {'id': 4, 'label': 'comb-file', 'type': 
'file', 'rank': 1, 'nodes': [ 'c:\\windows\\system32\\mmc.exe', 'c:\\mozillafirefox\\firefox.exe', 'c:\\windows\\system32\\cmd.exe', 'c:\\windows\\system32\\dllhost.exe', 'c:\\windows\\system32\\conhost.exe', 'c:\\wireshark\\tshark.exe', 'c:\\confer\\repwmiutils.exe', 'c:\\windows\\system32\\searchprotocolhost.exe', 'c:\\windows\\system32\\searchfilterhost.exe', 'c:\\windows\\system32\\consent.exe', 'c:\\python27\\python.exe', 'c:\\windows\\system32\\audiodg.exe', 'c:\\confer\\repux.exe', 'c:\\windows\\system32\\taskhost.exe' ]}, {'id': 5, 'label': 'c:\\wireshark\\dumpcap.exe', 'type': 'file', 'rank': 1}, {'id': 6, 'label': 'c:\\windows\\system32\\audiodg.exe', 'type': 'file', 'rank': 1} ]; const staticEdges = [ {'source': 1, 'target': 2, 'label': 'procstart', 'alname': null, 'time': '2022-07-19 16:00:17.074477', 'transparent': false}, {'source': 2, 'target': 3, 'label': 'procstart', 'alname': null, 'time': '2022-07-19 16:00:17.531504', 'transparent': false}, {'source': 4, 'target': 3, 'label': 'moduleload', 'alname': null, 'time': '2022-07-19 16:01:03.194938', 'transparent': false}, {'source': 5, 'target': 3, 'label': 'moduleload', 'alname': 'Execution - SysInternals Use', 'time': '2022-07-19 16:01:48.497418', 'transparent': false}, {'source': 6, 'target': 3, 'label': 'moduleload', 'alname': 'Execution - SysInternals Use', 'time': '2022-07-19 16:05:04.581065', 'transparent': false} ]; const sortedEdges = staticEdges.sort((a, b) => new Date(a.time) - new Date(b.time)); const nodesByRank = staticNodes.reduce((acc, node) => { const rank = node.rank || 0; if (!acc[rank]) acc[rank] = []; acc[rank].push(node); return acc; }, {}); const nodePositions = {}; const rankSpacingX = 200; const ySpacing = 100; Object.keys(nodesByRank).forEach(rank => { const nodesInRank = nodesByRank[rank]; nodesInRank.sort((a, b) => { const aEdges = staticEdges.filter(edge => edge.source === a.id || edge.target === a.id); const bEdges = staticEdges.filter(edge => edge.source === b.id || 
edge.target === b.id); return aEdges.length - bEdges.length; }); const totalNodesInRank = nodesInRank.length; nodesInRank.forEach((node, index) => { nodePositions[node.id] = { x: rank * rankSpacingX, y: index * ySpacing - (totalNodesInRank * ySpacing) / 2, }; }); }); const positionedNodes = staticNodes.map(node => ({ ...node, x: nodePositions[node.id].x, y: nodePositions[node.id].y, })); setNodes(positionedNodes); setEdges(sortedEdges); setLoading(false); }, []); const handleNodeClick = (event) => { const { nodes: clickedNodes } = event; if (clickedNodes.length > 0) { const nodeId = clickedNodes[0]; const clickedNode = nodes.find(node => node.id === nodeId); setClickedNode(clickedNode || null); } }; const handleEdgeClick = (event) => { const { edges: clickedEdges } = event; if (clickedEdges.length > 0) { const edgeId = clickedEdges[0]; const clickedEdge = edges.find(edge => `${edge.source}-${edge.target}` === edgeId); setClickedEdge(clickedEdge || null); } }; const handleClosePopup = () => { setClickedEdge(null); setClickedNode(null); }; const toggleTransparentEdges = () => { setShowTransparent(prevState => !prevState); }; if (loading) { return React.createElement('div', null, 'Loading...'); } const formatFilePath = (filePath) => { const parts = filePath.split('\\'); if (filePath.length > 12 && parts[0] !== 'comb-file') { return `${parts[0]}\\...`; } return filePath; }; const filteredNodes = showTransparent ? nodes : nodes.filter(node => edges.some(edge => (edge.source === node.id || edge.target === node.id) && !edge.transparent) ); const filteredEdges = showTransparent ? 
edges : edges.filter(edge => !edge.transparent); const options = { layout: { hierarchical: false }, edges: { color: { color: '#000000', highlight: '#ff0000', hover: '#ff0000' }, arrows: { to: { enabled: true, scaleFactor: 1 } }, smooth: { type: 'cubicBezier', roundness: 0.2 }, font: { align: 'top', size: 12 }, }, nodes: { shape: 'dot', size: 20, font: { size: 14, face: 'Arial' }, }, interaction: { dragNodes: true, hover: true, selectConnectedEdges: false, }, physics: { enabled: false, stabilization: { enabled: true, iterations: 300, updateInterval: 50 }, }, }; const graphData = { nodes: filteredNodes.map(node => { let label = node.label; if (node.type === 'file' && node.label !== 'comb-file') { label = formatFilePath(node.label); } return { id: node.id, label: label, title: node.type === 'file' ? node.label : '', x: node.x, y: node.y, shape: node.type === 'process' ? 'circle' : node.type === 'socket' ? 'diamond' : 'box', size: node.type === 'socket' ? 40 : 20, font: { size: node.type === 'socket' ? 10 : 14, vadjust: node.type === 'socket' ? -50 : 0 }, color: { background: node.transparent ? "rgba(151, 194, 252, 0.5)" : "rgb(151, 194, 252)", border: "#2B7CE9", highlight: { background: node.transparent ? "rgba(210, 229, 255, 0.1)" : "#D2E5FF", border: "#2B7CE9" }, }, className: node.transparent && !showTransparent ? 'transparent' : '', }; }), edges: filteredEdges.map(edge => ({ from: edge.source, to: edge.target, label: edge.label, color: edge.alname && edge.transparent ? '#ff9999' : edge.alname ? '#ff0000' : edge.transparent ? '#d3d3d3' : '#000000', id: `${edge.source}-${edge.target}`, font: { size: 12, align: 'horizontal', background: 'white', strokeWidth: 0 }, className: edge.transparent && !showTransparent ? 'transparent' : '', })), }; // Render the network visualization return React.createElement( 'div', { className: 'network-container' }, React.createElement( 'button', { className: 'toggle-button', onClick: toggleTransparentEdges }, showTransparent ? 
"Hide Transparent Edges" : "Show Transparent Edges" ), React.createElement( 'div', { id: 'network' }, React.createElement(vis.Network, { graph: graphData, options: options, events: { select: handleNodeClick, doubleClick: handleEdgeClick } }) ), clickedNode && React.createElement('div', { className: 'popup' }, React.createElement('button', { onClick: handleClosePopup }, 'Close'), React.createElement('h2', null, `Node: ${clickedNode.label}`), React.createElement('p', null, `Type: ${clickedNode.type}`) ), clickedEdge && React.createElement('div', { className: 'popup' }, React.createElement('button', { onClick: handleClosePopup }, 'Close'), React.createElement('h2', null, `Edge: ${clickedEdge.label}`), React.createElement('p', null, `AL Name: ${clickedEdge.alname || 'N/A'}`) ) ); }; const rootElement = document.getElementById('root'); if (rootElement) { ReactDOM.render(React.createElement(NetworkPage), rootElement); } else { console.error('Root element not found'); } } });             network_dashboard.css:             /* src/components/NetworkPage.css */ .network-container { height: 100vh; width: 100vw; display: flex; justify-content: center; align-items: center; position: relative; } #network-visualization { height: 100%; width: 100%; } /* Toggle button styling */ .toggle-button { /* position: absolute;*/ top: 10px; left: 10px; background-color: #007bff; color: white; border: none; border-radius: 20px; padding: 8px 16px; font-size: 14px; cursor: pointer; box-shadow: 0 4px 8px rgba(0, 0, 0, 0.1); } .toggle-button:hover { background-color: #0056b3; } /* Popup styling */ .popup { background-color: white; border: 1px solid #ccc; padding: 10px; border-radius: 8px; box-shadow: 0 4px 8px rgba(0, 0, 0, 0.1); font-size: 14px; width: 100%; height: 100%; position: relative; } /* Custom Scrollbar Styles */ .scrollable-popup { max-height: 150px; overflow-y: auto; scrollbar-width: thin; /* Firefox */ scrollbar-color: transparent; /* Firefox */ } 
.scrollable-popup::-webkit-scrollbar { width: 8px; /* WebKit */ } .scrollable-popup::-webkit-scrollbar-track { background: transparent; /* WebKit */ } .scrollable-popup::-webkit-scrollbar-thumb { background: grey; /* WebKit */ border-radius: 8px; } .scrollable-popup::-webkit-scrollbar-thumb:hover { background: darkgrey; /* WebKit */ } /* Popup edge and node styling */ .popup-edge { border: 2px solid #ff0000; color: #333; } .popup-node { border: 2px solid #007bff; color: #007bff; } .close-button { position: absolute; top: 5px; right: 5px; background: transparent; border: none; font-size: 16px; cursor: pointer; } .close-button:hover { color: red; }               network_dashboard.xml             <dashboard script="network_dashboard.js" stylesheet="network_dashboard.css"> <label>Network Visualization</label> <row> <panel> <html> <div id="root" style="height: 800px;"></div> </html> </panel> </row> </dashboard>              
Just encountered the same issue.  I'm following along on a Udemy Splunk course.  The instructor is using Windows, and it appears that this option is for local Windows Event logs that one would view in Event Viewer (they're not flat text files).  I'm guessing that the option appears only on Windows, as Ubuntu and macOS (which I'm using) use flat files for logs rather than Windows events, which I assume are in a database format that Event Viewer parses.
This site implies the remote.s3.endpoint setting is not needed.  https://blog.arcusdata.io/how-to-set-up-splunk-smart-store-in-aws See https://docs.splunk.com/Documentation/Splunk/9.3.0/Indexer/SmartStoresecuritystrategies#Authenticate_with_the_remote_storage_service for AWS permissions that must be granted to the role.
Change RF/SF to 1 and the CM will not complain about missing nodes.
Yes, removing an app from the server class will cause the client to uninstall that app.
If you simply want to find out when a host started sending data to an index, you simply need to find min(_time).

    | tstats min(_time) where index=something earliest=1 by host

Two caveats:

1) It's based on the _time field, so if you've ingested a backlog of 3 years' worth of data right after deploying your forwarder, your results will probably not be true. I don't remember if you can use _indextime in tstats. You have to check.
2) It will of course only show data from the buckets which haven't yet rolled to frozen, so for old data it will not be true.
There are more or less three ways of going about it.

1. Freezing the data to external storage instead of removing it - the downside is that you have to thaw the data if you ever want to use it again.
2. Simply stop your server and copy out the indexed data from the buckets - it uses much more space, but you can copy those buckets back into the index directory and you're ready to go (unless you forget about retention periods and your data immediately rolls to frozen ;-)).
3. Bend over backwards and run a bunch of searches exporting your data to some CSV or JSON. The upside is that you can use such an export with other tools (probably after some processing), but the downside is that you won't be able to use it again with Splunk without additional magic and reingesting it into an index.
Sorry for the punctuation problem. What I simply need is to create a search that says, for every host, how many days it has been talking to index X, using a command that does a span=1d. The heartbeat is to monitor the hosts and make sure they are reporting to the right place. This is often done by sending or tracking periodic signals from the host to Splunk.  Thanks for your help and efforts.
Adding to @dural_yyz's answer - your question seems not to be Splunk-related but rather connected with your source system, which might or might not be able to produce the required logs. If you're not getting logs into Splunk (assuming that the intermediate SC4S is working in general because it sends other logs), there are two possibilities: either your SC4S is misconfigured and doesn't forward the data properly (but to troubleshoot that you'd need to be absolutely sure that SC4S is receiving the relevant events from the source; did you verify that?), or your source is not sending the desired data (and that is something you need to resolve on the source side).
Hi,

I downloaded the Mac Intel version 4.2.1 of the app to use numpy and pandas. I copied over exec_anaconda.py as per the README, and also util.py (exec_anaconda.py uses it), then added a test script with the preamble mentioned in the README:

    #!/usr/bin/python
    import exec_anaconda
    exec_anaconda.exec_anaconda()

    import pandas as pd
    import sys
    print(sys.path)

This runs but triggers Mac security alerts for a whole bunch of files (easily more than 25, and some need multiple clicks). I have "Allow applications downloaded from App Store and identified developers" in my security settings.  Given that this package is from Splunk, can Splunk codesign it (or do whatever else is needed) so it is marked as from an identified developer? Or is there a setting I can use to turn off the warnings for everything from a single tar.gz, or everything under a folder, etc.? I'm on macOS Sonoma 14.6 running Splunk 9.2.2.

Thanks
I have no idea why it printed this. And it's surprising that adding

    | spath json.msg output=msg
    | spath input=msg query{}

in this context yields different results than if I place it after my query in our system. And that the query for some reason also extracts the query[] part of the JSON (in your context, not in mine). Why? Who asked for that? But even then I still cannot access the 'parsed' field named 'batch'... I think the query is some generic function doing some guess-what-is-important extractions.

I've got an idea, which will force us to avoid clever functions doing random data extractions. Can you please show me how I could transform this little JSON sample into the string:

    select * from whatever.whatever w where w.whatever in (1,2,3)

The equivalent in jq would look like:

    jq -r '.json.msg|fromjson|.query[0] as $q| .params[]|reduce .[] as $param ($q; sub("\\?";$param))'

The full bash command, including input, being:

    echo '{"time":"2024-09-19T08:03:02.234663252Z","json":{"ts":"2024-09-19T15:03:02.234462341+07:00","logger":"<anonymized>","level":"WARN","class":"net.ttddyy.dsproxy.support.SLF4JLogUtils","method":"writeLog","file":"<anonymized>","line":26,"thread":"pool-1-thread-1","arguments":{},"msg":"{\"name\":\"\", \"connection\":22234743, \"time\":20000, \"success\":false, \"type\":\"Prepared\", \"batch\":false, \"querySize\":1, \"batchSize\":0, \"query\":[\"select * from whatever.whatever w where w.whatever in (?,?,?) \"], \"params\":[[\"1\",\"2\",\"3\"]]}","scope":"APP"},"kubernetes":{"pod_name":"<anonymized>","namespace_name":"<anonymized>","labels":{"whatever":"whatever"},"container_image":"<anonymized>"}}' | jq -r '.json.msg|fromjson|.query[0] as $q| .params[]|reduce .[] as $param ($q; sub("\\?";$param))';
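For reference, the same jq logic (parse the embedded msg, take query[0], then substitute one "?" per param) can be sketched in Python. The input here is a trimmed-down stand-in containing only the fields the jq filter touches, not the full event:

```python
import json

# trimmed-down stand-in for the event: only the fields the jq filter uses
raw = json.loads(
    '{"json": {"msg": "{\\"query\\":[\\"select * from whatever.whatever w '
    'where w.whatever in (?,?,?) \\"], \\"params\\":[[\\"1\\",\\"2\\",\\"3\\"]]}"}}'
)

msg = json.loads(raw["json"]["msg"])   # .json.msg|fromjson
query = msg["query"][0]                # .query[0] as $q
for params in msg["params"]:           # .params[]
    sql = query
    for p in params:                   # reduce: fill in one "?" per param
        sql = sql.replace("?", p, 1)
    print(sql)                         # prints the query with (?,?,?) filled as (1,2,3)
```

The `replace("?", p, 1)` mirrors jq's `sub("\\?";$param)`, which also replaces only the first match per step.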