All Posts

OK. YMMV, but with 256G of RAM I would definitely _not_ want any swap at all. I know that: 1) Many Linux installers create swap space by default whether it's needed or not. 2) There are still some myths from the eighties(?) circulating that "you should have twice as much swap as RAM". In your case that would be 0.5TB of swap, which, as you will surely admit, would be completely ridiculous. But every use case is different, so in Splunk's case I think it's better to fail early and restart than to let your load go sky high and wait for the crash anyway.
@kiran_panchavat  Where do I create a new index so that all these Akamai logs flow into it? Consider that we are configuring the data inputs on a HF...
@zmanaf  Another method is to connect Splunk data to Power BI using the REST API. This approach requires some configuration and an understanding of your Splunk data. You can find a general guide on how to do this in this community thread: https://community.fabric.microsoft.com/t5/Desktop/How-to-Connect-Splunk-data-using-rest-api/td-p/3445195
CData Power BI Connector: CData offers a Power BI Connector for Splunk, which simplifies the process of connecting to and visualizing Splunk data in Power BI. You can follow their complete guide: https://www.cdata.com/kb/tech/splunk-powerbi-gateway.rst
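To make the REST-API route concrete, here is a minimal Python sketch (my own illustration, not from Splunk or CData) of the flattening step you typically need before Power BI can load the data as a table. It assumes you have already fetched results from Splunk's /services/search/jobs/export endpoint with output_mode=json, which streams one JSON object per line; the sample payload and field names below are made up for the demo:

```python
import json

def splunk_results_to_rows(payload: str) -> list:
    """Flatten newline-delimited JSON from the export endpoint into plain dicts."""
    rows = []
    for line in payload.splitlines():
        if not line.strip():
            continue
        result = json.loads(line).get("result", {})
        # Multivalue Splunk fields arrive as lists; join them so every cell is scalar.
        rows.append({k: ",".join(v) if isinstance(v, list) else v
                     for k, v in result.items()})
    return rows

# Illustrative payload shaped like the export endpoint's streamed output.
sample = "\n".join([
    json.dumps({"preview": False, "result": {"host": "hf1", "count": "42"}}),
    json.dumps({"preview": False, "result": {"host": "hf2", "tags": ["web", "prod"]}}),
])
rows = splunk_results_to_rows(sample)
print(rows)
```

From there the rows can be handed to Power BI (for example via a CSV file or the Power Query web connector); the exact import path depends on your setup.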
If you're using Ingest Processor and SPL2, you can split the flowTuples into individual events. Here's the pipeline config to do so. I have re-used the field names referenced in the other answers to make the migration easier.

Steps:
1. Onboard data like before with the MSCS add-on.
2. Create the following pipeline with partitioning set to sourcetype == mscs:nsg:flow2, to avoid conflicting with the INDEXED_EXTRACTIONS in the TA you may have installed already. When creating a pipeline matching a sourcetype, Ingest Processor will pull the event out before it's indexed, transform it, and send it back into Splunk or your destination of choice:

/* A valid SPL2 statement for a pipeline must start with "$pipeline", and include "from $source" and "into $destination". */
$pipeline = | from $source
| flatten _raw
| expand records
| flatten records
| fields - records
| flatten properties
| rename flows AS f1
| expand f1
| flatten f1
| rename flows AS f2
| expand f2
| flatten f2
| expand flowTuples
| eval flow_time=mvindex(split(flowTuples,","),0)
| eval src_ip=mvindex(split(flowTuples,","),1)
| eval dest_ip=mvindex(split(flowTuples,","),2)
| eval src_port=mvindex(split(flowTuples,","),3)
| eval dest_port=mvindex(split(flowTuples,","),4)
| eval transport=mvindex(split(flowTuples,","),5)
| eval traffic_flow=mvindex(split(flowTuples,","),6)
| eval traffic_result=mvindex(split(flowTuples,","),7)
| eval flow_state=mvindex(split(flowTuples,","),8)
| eval packets_in=mvindex(split(flowTuples,","),9)
| eval bytes_in=mvindex(split(flowTuples,","),10)
| eval packets_out=mvindex(split(flowTuples,","),11)
| eval bytes_out=mvindex(split(flowTuples,","),12)
// Normalization, which could also be done at search-time
| eval action=case(traffic_result == "A", "allowed", traffic_result == "D", "blocked")
| eval protocol=if(match(src_ip, /^\d{1,3}\.\d{1,3}\.\d{1,3}\.\d{1,3}$/), "ip", "unknown")
| eval direction=case(traffic_flow == "I", "inbound", traffic_flow == "O", "outbound")
| eval transport=case(transport == "T", "tcp", transport == "U", "udp")
| eval bytes=(coalesce(bytes_in,0)) + (coalesce(bytes_out,0))
| eval packets=(coalesce(packets_in,0)) + (coalesce(packets_out,0))
| fields - flowTuples
| eval _raw=json_object("resourceId", resourceId, "category", category, "macAddress", macAddress, "Version", Version, "systemId", systemId, "operationName", operationName, "mac", mac, "rule", rule, "flow_time", flow_time, "src_ip", src_ip, "dest_ip", dest_ip, "src_port", src_port, "dest_port", dest_port, "traffic_flow", traffic_flow, "traffic_result", traffic_result, "bytes_in", bytes_in, "bytes_out", bytes_out, "bytes", bytes, "packets_in", packets_in, "packets_out", packets_out, "packets", packets, "transport", transport, "protocol", protocol, "direction", direction, "action", action)
| eval _time=flow_time
| fields - flow_state, f1, time, f2, properties, resourceId, category, macAddress, Version, systemId, operationName, mac, rule, flow_time, src_ip, dest_ip, src_port, dest_port, traffic_flow, traffic_result, bytes_in, bytes_out, bytes, packets_in, packets_out, packets, transport, protocol, direction, action
| into $destination;

On a side note, Microsoft will be deprecating NSG Flow Logs and replacing them with Virtual Network Flow Logs, which have a similar format. Here's the config for Virtual Network Flow Logs with sourcetype mscs:vnet:flow:

/* A valid SPL2 statement for a pipeline must start with "$pipeline", and include "from $source" and "into $destination". */
$pipeline = | from $source
| flatten _raw
| expand records
| flatten records
| fields - records
| rename flowRecords AS f1
| expand f1
| flatten f1
| rename flows AS f2
| expand f2
| flatten f2
| expand flowGroups
| flatten flowGroups
| expand flowTuples
| eval flow_time=mvindex(split(flowTuples,","),0)
| eval src_ip=mvindex(split(flowTuples,","),1)
| eval dest_ip=mvindex(split(flowTuples,","),2)
| eval src_port=mvindex(split(flowTuples,","),3)
| eval dest_port=mvindex(split(flowTuples,","),4)
| eval transport=mvindex(split(flowTuples,","),5)
| eval traffic_flow=mvindex(split(flowTuples,","),6)
| eval flow_state=mvindex(split(flowTuples,","),7)
| eval flow_encryption=mvindex(split(flowTuples,","),8)
| eval packets_in=toint(mvindex(split(flowTuples,","),9))
| eval bytes_in=toint(mvindex(split(flowTuples,","),10))
| eval packets_out=toint(mvindex(split(flowTuples,","),11))
| eval bytes_out=toint(mvindex(split(flowTuples,","),12))
// Normalization, which could also be done at search-time
| eval action=case(flow_state == "B", "allowed", flow_state == "D", "blocked", flow_state == "E", "teardown", flow_state == "C", "flow")
| eval protocol=if(match(src_ip, /^\d{1,3}\.\d{1,3}\.\d{1,3}\.\d{1,3}$/), "ip", "unknown")
| eval direction=case(traffic_flow == "I", "inbound", traffic_flow == "O", "outbound")
| eval bytes=(toint(coalesce(bytes_in,0))) + (toint(coalesce(bytes_out,0)))
| eval packets=(toint(coalesce(packets_in,0))) + (toint(coalesce(packets_out,0)))
| fields - flowGroups
| eval _raw=json_object("record_time", time, "flowLogGUID", flowLogGUID, "flowLogResourceID", flowLogResourceID, "targetResourceId", targetResourceID, "category", category, "macAddress", macAddress, "flowLogVersion", flowLogVersion, "operationName", operationName, "aclID", aclID, "flow_encryption", flow_encryption, "src_ip", src_ip, "dest_ip", dest_ip, "src_port", src_port, "dest_port", dest_port, "traffic_flow", traffic_flow, "bytes_in", bytes_in, "bytes_out", bytes_out, "bytes", bytes, "packets_in", packets_in, "packets_out", packets_out, "packets", packets, "transport", transport, "protocol", protocol, "direction", direction, "action", action)
| eval _time = flow_time / 1000
| fields - packets_out, bytes_in, rule, f1, f2, packets, src_ip, targetResourceID, protocol, action, dest_port, aclID, flow_encryption, packets_in, operationName, transport, src_port, flow_state, macAddress, bytes_out, bytes, dest_ip, flowLogVersion, flowLogGUID, category, flowLogResourceID, flowTuples, traffic_flow, direction, time, flow_time
| into $destination;
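For anyone who wants to sanity-check the tuple positions outside of Splunk, here is a small Python sketch of the same split-and-normalize logic the NSG pipeline performs with mvindex(split(flowTuples, ","), N). The sample tuple below is illustrative only, not taken from the poster's data:

```python
# Positions mirror the mvindex(split(flowTuples, ","), N) calls in the pipeline.
NSG_FIELDS = ["flow_time", "src_ip", "dest_ip", "src_port", "dest_port",
              "transport", "traffic_flow", "traffic_result", "flow_state",
              "packets_in", "bytes_in", "packets_out", "bytes_out"]

def parse_flow_tuple(tuple_str: str) -> dict:
    """Split one NSG flowTuple into named fields, then normalize the
    single-letter codes the same way the pipeline's case()/eval steps do."""
    d = dict(zip(NSG_FIELDS, tuple_str.split(",")))
    d["action"] = {"A": "allowed", "D": "blocked"}.get(d["traffic_result"])
    d["direction"] = {"I": "inbound", "O": "outbound"}.get(d["traffic_flow"])
    d["transport"] = {"T": "tcp", "U": "udp"}.get(d["transport"])
    return d

t = "1542110377,10.0.0.4,13.67.143.118,44931,443,T,O,A,C,30,16978,24,14008"
parsed = parse_flow_tuple(t)
print(parsed["action"], parsed["direction"], parsed["transport"])
```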
Hey Buddy, no luck with your command. Kindly find the logs below:

root@hf2:/opt# ps aux | grep /opt/log/
root 3152 0.0 0.0 9276 2304 pts/2 S+ 13:17 0:00 grep --color=auto /opt/log/
root@hf2:/opt# ls -l /opt/log/
total 204
-rw-r-xr--+ 1 root root 207575 Feb 19 11:12 cisco_ironport_web.log
root@hf2:/opt#

splunkd logs for your reference:

03-04-2025 22:23:55.770 +0530 INFO TailingProcessor [32908 MainTailingThread] - Parsing configuration stanza: monitor:///opt/log/.
03-04-2025 22:29:34.873 +0530 INFO TailingProcessor [33197 MainTailingThread] - Parsing configuration stanza: monitor:///opt/log/.
03-04-2025 22:39:22.449 +0530 INFO TailingProcessor [33712 MainTailingThread] - Parsing configuration stanza: monitor:///opt/log/.
03-04-2025 22:44:59.341 +0530 INFO TailingProcessor [33979 MainTailingThread] - Parsing configuration stanza: monitor:///opt/log/cisco_ironport_web.log.
03-04-2025 22:44:59.341 +0530 INFO TailingProcessor [33979 MainTailingThread] - Adding watch on path: /opt/log/cisco_ironport_web.log.
03-04-2025 22:54:52.366 +0530 INFO TailingProcessor [34246 MainTailingThread] - Parsing configuration stanza: monitor:///opt/log/cisco_ironport_web.log.
03-04-2025 22:54:52.366 +0530 INFO TailingProcessor [34246 MainTailingThread] - Adding watch on path: /opt/log/cisco_ironport_web.log.
03-05-2025 12:35:53.768 +0530 INFO TailingProcessor [2117 MainTailingThread] - Parsing configuration stanza: monitor:///opt/log/cisco_ironport_web.log.
03-05-2025 12:35:53.768 +0530 INFO TailingProcessor [2117 MainTailingThread] - Adding watch on path: /opt/log/cisco_ironport_web.log.
03-05-2025 13:07:00.440 +0530 INFO TailingProcessor [2920 MainTailingThread] - Parsing configuration stanza: monitor:///opt/log/.
03-05-2025 13:16:28.483 +0530 INFO TailingProcessor [3132 MainTailingThread] - Parsing configuration stanza: monitor:///opt/log/.
03-05-2025 13:18:26.876 +0530 INFO TailingProcessor [3339 MainTailingThread] - Parsing configuration stanza: monitor:///opt/log/.
root@hf2:/opt#
@zmanaf  Please refer to this documentation: https://docs.splunk.com/Documentation/ODBC/3.1.1/UseODBC/PowerBI  https://docs.splunk.com/Documentation/ODBC/3.1.1/UseODBC/AboutSplunkODBCDriver
@zmanaf  You can use this to integrate Power BI with Splunk https://splunkbase.splunk.com/app/1606 
Hi @chenfan, as I said, upgrade your platform using the upgrade path described in the documentation, and don't skip any steps! Then you can upgrade your apps. Ciao. Giuseppe
Hi @Keith_NZ, first of all, in addition to the screenshots, please also add the code and a sample of your logs in text format using the "Add/Edit Code Sample" button. Then, if you are doing an extraction from _raw, you don't need to make it explicit in the field option. Your first rex expression is almost correct: you have to declare the format of the field (e.g. if it's numeric you have to add \d or something like this), and then you have to declare something to delimit the string to extract as a field. E.g. to extract the postCode, you should use: rex "postCode\\\":\\\"(?<postCode>\d+)" In this specific case, beware of the backslashes: to use them in Splunk you have to add an additional backslash. The last one instead isn't correct: | rex field=_raw reg_str because it isn't a field extraction. Ciao. Giuseppe
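To see why the extra backslashes matter, here is a quick Python equivalent (the event text below is hypothetical, for illustration only): when a JSON payload is logged in escaped form, the raw text literally contains \" sequences, so the regex must match a literal backslash before each quote.

```python
import re

# Hypothetical raw event where the JSON payload was logged in escaped form,
# so the text literally contains backslash-quote sequences: \"postCode\":\"2041\"
raw = r'2025-03-05 13:07:00 payload={\"name\":\"foo\",\"postCode\":\"2041\"}'

# In the regex, \\ matches one literal backslash; in SPL you would double up
# again and write: rex "postCode\\\":\\\"(?<postCode>\d+)"
m = re.search(r'postCode\\":\\"(?P<postCode>\d+)', raw)
print(m.group("postCode"))
```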
Hi @gcusello Thank you for your reply! Can I upgrade the platform from version 7.x.x to version 9.3.x and then uniformly upgrade the Apps/Add-ons to their latest versions? Will this have an impact on my system?
Hi All, I am new to Power BI. My question is, how do we integrate Splunk with Power BI? Is there an official guide or manual from Splunk on how to configure this integration on both sides? Cheers, Zamir
Let me tell you about the exact phenomenon. Splunk Enterprise is currently running as two separate tiers: a search head server and an indexer server. The server environment is as follows:
OS version: CentOS 7
Splunk version: 9.0.4
RAM: 256G
swap: 16G
I'm using about 5% of memory on average, but swap usage is at 100%.
As several people urged you, please post a complete sample event, not screen cutouts.  You can sanitize the sample any way you like, but keep quotation marks, commas, curly brackets, and square brackets in their exact places. Meanwhile, the cutouts give me enough info to determine that part of the event is JSON.  Here is an experiment for you. | rex "^[^{]+(?<only_json>.+})" | spath input=only_json See if more fields get extracted.
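The rex-then-spath trick above can be reproduced outside Splunk to convince yourself it works. Here is a Python sketch using a made-up hybrid event (a syslog-style prefix followed by a JSON body); the regex is the same idea as the rex in the experiment:

```python
import re
import json

# Hypothetical hybrid event: non-JSON prefix, then a JSON body.
event = 'Mar 05 13:07:00 hf2 app[2920]: {"user": "alice", "status": "ok"}'

# Same idea as: | rex "^[^{]+(?<only_json>.+})"
# [^{]+ consumes the prefix up to the first brace; the capture grabs the rest.
m = re.search(r'^[^{]+(?P<only_json>\{.+\})', event)
fields = json.loads(m.group("only_json"))
print(fields["status"])
```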
On the deployer right and push it to SHs? And where can I configure this?
@Karthikeya If a Heavy Forwarder (HF) is not available, install it on the search head.
Thanks, I changed the config and that resolved the problem. Best regards.
We have a deployment server which receives data from UFs. We have a cluster manager, a deployer and SHs. Where do we install and configure this add-on: on the DS, the deployer or the SHs? Please confirm, I am confused. We don't have an HF at the moment. Normally, where do we need to configure data inputs?
Hi All, In an SPL2 Ingest Pipeline I want to assemble a regular expression and then use it in a rex command, but I am having trouble. For example, this simple test where I specify the regex as a text string on the rex command works: But this version doesn't: Any idea what I am doing wrong? Thanks
The search command cannot search for '*'.  The '=' character also is a challenge.  You can, however, use regex to filter on these and other "special" characters.
| eval msxxxt="*Action=GexxxxdledxxxxReport Duration=853*"
| regex "="
| rex "Duration=(?<Duration>\d+)"
| timechart span=1h avg(Duration) AS avg_response by msxxxt
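As a quick cross-check of the extraction itself, here is the same Duration pull-out in Python. I'm assuming the event literally contains "Duration=853", as in the sanitized sample string from the question:

```python
import re

# Pull the numeric Duration out of a message containing '*' and '=',
# which the search command can't match literally.
msg = "*Action=GexxxxdledxxxxReport Duration=853*"
m = re.search(r"Duration=(?P<Duration>\d+)", msg)
print(int(m.group("Duration")))  # 853
```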
Hello isoutamo, Thank you for the links; a lot of useful info. I am not an expert in the area of PKI, certificates, etc. I have a basic understanding only. The term leaf certificate was new to me. Ptrsnk