All Topics

Find Answers
Ask questions. Get answers. Find technical product solutions from passionate members of the Splunk community.


Hello, I'm currently doing some training as part of a SOC analyst intern position. One of the questions in the little exercise our trainer created for us is this (some information has been omitted purposely out of respect for the organization): How many authentication attempts of each user category exist for all successful authentications? Would someone be able to assist me with a general start for how I would write up my search to look for this kind of info?
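A minimal SPL sketch of one way to start, assuming an authentication data source where an action field marks success and something like user_category identifies the category (the index and field names here are assumptions, not the real data):

index=authentication action=success
| stats count by user_category

The idea is simply to filter down to successful authentications first, then count events per category with stats.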
I am running the following query for a single 24-hour period. I was expecting a single summary row as the result, but it is split across 2 rows and I'm not sure why. Here's the query:

index=federated:license_master_internal source=*license_usage.log type=Usage pool=engineering_pool
| eval h=if(len(h)=0 OR isnull(h),"(SQUASHED)",h)
| eval s=if(len(s)=0 OR isnull(s),"(SQUASHED)",s)
| eval idx=if(len(idx)=0 OR isnull(idx),"(UNKNOWN)",idx)
| where like(h, "%metfarm%") OR like(h, "%scale%")
| eval h=rtrim(h,".eng.ssnsgs.net")
| eval env=split(h,"-")
| eval env=mvindex(env,1)
| eval env=if(like(env,"metfarm%"),"metfarm",env)
| eval env=if(like(env,"sysperf%"),"95x",env)
| eval env=if(like(env,"gs02"),"tscale",env)
| timechart span=1d sum(b) as b by env
| foreach * [eval <<FIELD>>=round('<<FIELD>>'/1024/1024/1024, 3)]
| addtotals
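One likely explanation (an assumption, since the exact time range isn't shown): timechart span=1d buckets events by calendar day, so a 24-hour window that crosses midnight lands in two daily buckets and produces two rows. A sketch of the same search with the window snapped to day boundaries so everything falls into a single bucket:

index=federated:license_master_internal source=*license_usage.log type=Usage pool=engineering_pool earliest=-1d@d latest=@d
| (same eval/where/env logic as above)
| timechart span=1d sum(b) as b by env
| foreach * [eval <<FIELD>>=round('<<FIELD>>'/1024/1024/1024, 3)]
| addtotals

Picking a whole-day range in the time range picker (e.g. "Yesterday") achieves the same thing.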
Hi community, I have an AlwaysOn Availability Group (AO AG) with two nodes and these four IP addresses:
10.10.10.62 (DB 1)
10.10.10.63 (DB 2)
10.10.10.61 (Cluster IP)
10.10.10.60 (AG Listener IP)
I want to discover the two nodes automatically. According to the documentation, Configure Microsoft SQL Server Collectors (appdynamics.com): "To enable monitoring of all the nodes, you must enable the dbagent.mssql.cluster.discovery.enabled property either at the Controller level or at the agent level." I am running the following:
$ nohup java -Dappdynamics.agent.maxMetrics=300000 -Ddbagent.name=DBMon-Agent -jar db-agent.jar -Ddbagent.mssql.cluster.discovery.enabled=true &
But it doesn't work when I configure the collector with the AG Listener IP. I also get the following: `Is Failover Cluster Discovery Enabled: False`, even though I have added dbagent.mssql.cluster.discovery.enabled. What could I possibly be doing wrong? Thank you
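One thing that might be worth checking (an assumption about the cause, not something confirmed in the post): Java only treats -D options as JVM system properties when they appear before -jar; anything placed after the jar file is handed to the application as a plain program argument, which could explain why the flag still reports as False. A sketch of the same command with the property moved ahead of -jar:

$ nohup java -Dappdynamics.agent.maxMetrics=300000 -Ddbagent.name=DBMon-Agent -Ddbagent.mssql.cluster.discovery.enabled=true -jar db-agent.jar &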
Is there a way to send all log data from the Splunk OpenTelemetry Collector to an NFS file system for required log retention?
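A rough sketch of one possible approach, assuming the upstream OpenTelemetry Collector file exporter is available in the Splunk distribution being used and that /mnt/nfs is an NFS mount (both are assumptions that would need verifying):

exporters:
  file:
    path: /mnt/nfs/otel/logs.json    # hypothetical path on the NFS mount

service:
  pipelines:
    logs:
      receivers: [filelog]           # whichever log receiver is already configured
      exporters: [file]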
I have a JSON that looks like this (commas added for validity):
{
  "Field1" : [
    { "id": 1234, "name": "John" },
    { "id": 5678, "name": "Mary", "occupation": { "title": "lawyer", "employer": "law firm" } }
  ]
}
I want to extract the value of the "name" field from the object that contains an occupation field (could be any). In this case I want to get "Mary" and store it inside a variable. How would I do this using Splunk search language?
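A minimal SPL sketch of one way to do this with spath and mvexpand; SPL works with fields rather than variables, so the value ends up in a field (the index and sourcetype below are placeholders):

index=main sourcetype=my_json
| spath path=Field1{} output=entry
| mvexpand entry
| spath input=entry
| where isnotnull('occupation.title')
| table name

Each array element is expanded into its own result row, parsed with spath, and only the rows that actually contain an occupation object are kept.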
I configured a macro named securemsg(1), and I use this macro in the following search: ....| eval log_info=_raw | 'securemsg(log_info)' | .... When I run this search I get the following error: Error in 'SearchParser': Missing a search command before '''. Error at position '264' of search query 'search index="linuxos" sourcetype="syslog" host="C...{snipped} {errorcontext = fo=_raw | 'securemsg(}'. Please help. Thanks
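A hedged guess based on the error text: the macro looks like it is wrapped in straight single quotes, but Splunk macros are invoked with backtick characters, which is why the parser complains about a missing search command. A sketch of the same invocation with backticks, assuming the macro definition itself expands to valid SPL at that point in the pipeline:

....| eval log_info=_raw | `securemsg(log_info)` | ....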
How would I add a permanent search or field to a sourcetype? For example: I have a set of data that I have been able to snag a field out of using this search: sourcetype="collectedevents" | rex field=_raw "<Computer>(?<Computer>[^<]+)</Computer>" Our sourcetype is "collectedevents", and I found the way to pull the <Computer> field that was in the XML data down to a field "Computer". But what I would like to be able to do is to have that field be permanent, or transpose the "host =" to not be the host of the WEC but the host of the origin server that it came from. Long story short, we have servers that we don't want the Splunk Forwarder on, because we know it can execute scripts, creating a vulnerability on those servers. Any help is appreciated, thank you!
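A sketch of two possible approaches, reusing the regex from the question (the conf file locations, and whether the host override belongs on an indexer or heavy forwarder in this setup, are assumptions). A search-time extraction in props.conf makes the Computer field permanent, and an index-time transform can rewrite host, but only for data indexed after it is in place:

# props.conf
[collectedevents]
EXTRACT-computer = <Computer>(?<Computer>[^<]+)</Computer>
TRANSFORMS-set_host = set_host_from_computer

# transforms.conf
[set_host_from_computer]
REGEX = <Computer>([^<]+)</Computer>
DEST_KEY = MetaData:Host
FORMAT = host::$1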
Hello, we have been investigating roughly 30% of Splunk logs missing in our production environment. I'm thinking it may be due to TIME_FORMAT or due to high-volume logs in production. Can you please let me know what the key-value for TIME_FORMAT in the props.conf file should be? The Lagsec value is 1.5 seconds on the source logs, and the Splunk forwarder log sourcetype we are checking has 1.13 s. Additionally, the source logs have the format 05/Mar/2024, while the SplunkForwarder logs have the format 2024-03-05. 2048 KBps is set in both the dev and prod config files. We also have ignoreOlderThan=1d, so I'm looking to remove this parameter, fix TIME_FORMAT, and check again. Can you please help or provide additional information to check?
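For a timestamp like 05/Mar/2024 at the start of an event, a props.conf sketch might look like the following (the sourcetype name and lookahead are placeholders, and if the source timestamps also include a time of day the format string would need extending):

[your_source_sourcetype]
TIME_PREFIX = ^
TIME_FORMAT = %d/%b/%Y
MAX_TIMESTAMP_LOOKAHEAD = 15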
Thanks in advance. 1. I have a JSON object as content.payload{} and need to extract the values inside the payload. Splunk already extracts the field as content.payload{}, and the result is "AP Import flow related results : Extract has no AP records to Import into Oracle", but I want to extract all the details inside content.payload. How can I extract them with a Splunk query or from the props.conf file? I tried spath but wasn't able to get it. 2. How do I rename the wildcarded content.payload{}* fields?

"content" : {
  "jobName" : "AP2",
  "region" : "NA",
  "payload" : [ {
    "GL Import flow processing results" : [ {
      "concurBatchId" : "4",
      "batchId" : "6",
      "count" : "50",
      "impConReqId" : "1",
      "errorMessage" : null,
      "filename" : "CONCUR_GL.csv"
    } ]
  },
  "AP Import flow related results : Extract has no AP records to Import into Oracle" ]
},
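A rough SPL sketch for both parts, assuming the event is valid JSON under a top-level content key (the index and sourcetype are placeholders). For part 1, expanding and parsing each payload element:

index=main sourcetype=concur_json
| spath path=content.payload{} output=payload
| mvexpand payload
| spath input=payload

For part 2, rename accepts wildcards, so one option for the content.payload{}* fields is:

| rename "content.payload{}.*" AS payload_*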
Hi, we are using the ITSI Service Map / Service Analyzer to monitor services. We have a use case where, for the same service, we need to add multiple KPIs, and those KPIs depend on different entities. For example: we have an infrastructure-related KPI which uses host as the entity; another KPI is "service up", which basically checks that the service is up, and in this case the entity is the process name. We also have a KPI for garbage collection which has yet another entity. Question: I am trying to understand the best way to handle such a scenario, where we can add all these KPIs without making the service map too complex.
I want to pass a dynamic value from my search result into the email alert subject. I tried $result.fieldname$ but it is not coming up in the email alert. Can someone help me? Thanks
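A couple of hedged points, since the actual search isn't shown: $result.fieldname$ is taken from the first row of the alert's final results, so the field has to survive to the end of the search and be spelled exactly the same in the subject token. A sketch with made-up field names:

index=web status=500
| stats count by host
| table host count

Email subject: Server error alert for $result.host$ ($result.count$ events)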
Hi all, I set a cron schedule on an alert. My alert should not trigger between 9pm and 7am. I used the cron expression below, but I am still receiving alerts after 9pm: 0 0-21, 7-23 5-9 3 1-7 Is this cron expression correct? Do I need to make any changes?
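A cron schedule has five space-separated fields: minute, hour, day of month, month, and day of week. As written, the space after the comma splits the hour range across two fields, and the remaining values end up constraining day of month and month, which is probably not what was intended. Assuming the alert should simply run hourly and only between 07:00 and 20:59 (so nothing fires from 9pm to 7am), a minimal sketch would be:

0 7-20 * * *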
Hello everyone, I followed the steps to install DSDL (https://docs.splunk.com/Documentation/DSDL/5.1.1/User/InstallDSDL) and this scenario: https://www.sidechannel.blog/en/detecting-anomalies-using-machine-learning-on-splunk/ But when I try to start the container, I get a 403 error. I checked roles and capabilities, I checked all kinds of posts from the community, and I checked the global permissions on DSDL. Is there a known issue about that? Have a good day all, Betty
Hi, I'd like to create a detailed report with info including the type of forwarder, the average KB/s, the OS, the IP, and the Splunk version, but also with information about exactly which index each forwarder forwards to. Is it possible to reuse the search from the monitoring console's forwarder instance view and somehow connect it to each index?

`dmc_get_forwarder_tcpin` hostname=*
| eval source_uri = hostname.":".sourcePort
| eval dest_uri = host.":".destPort
| eval connection = source_uri."->".dest_uri
| stats values(fwdType) as fwdType, values(sourceIp) as sourceIp, latest(version) as version, values(os) as os, values(arch) as arch, dc(dest_uri) as dest_count, dc(connection) as connection_count, avg(tcp_KBps) as avg_tcp_kbps, avg(tcp_eps) as avg_tcp_eps by hostname, guid
| eval avg_tcp_kbps = round(avg_tcp_kbps, 2)
| eval avg_tcp_eps = round(avg_tcp_eps, 2)
| `dmc_rename_forwarder_type(fwdType)`
| rename hostname as Instance, fwdType as "Forwarder Type", sourceIp as IP, version as "Splunk Version", os as OS, arch as Architecture, guid as GUID, dest_count as "Receiver Count", connection_count as "Connection Count", avg_tcp_kbps as "Average KB/s", avg_tcp_eps as "Average Events/s"

And probably somehow join it with:

| tstats count values(host) AS host WHERE index=* BY index

The issue I see is that it searches dmc_get_forwarder_tcpin, which is equal to index=_internal sourcetype=splunkd group=tcpin_connections (connectionType=cooked OR connectionType=cookedSSL) fwdType=* guid=*, and I cannot find the indexes there. How can I connect it to each index?
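A rough sketch of one way to bolt index information onto the existing search, assuming the forwarder's hostname matches the host field on the events it sends (usually true, but things like syslog relays or HEC traffic can break that assumption):

<existing forwarder search from above, ending with the rename>
| join type=left Instance
    [| tstats values(index) as Indexes where index=* by host
     | rename host as Instance]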
So, I have a chart function that works perfectly! | chart sum(transactionMade) over USERNUMBER by POSTDATE But I want my chart to have USERNUMBER and USERNAME. They are both correlated, so it should not be an issue. I also want to add Team Number, which has no correlation to USERNUMBER or USERNAME. Is it possible to have multiple fields after over? I can concatenate all the fields into one string, but it would be easier if they were separate columns. Thank you!
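chart only accepts a single field after over, but one workaround sketch is to build the pivot with stats and xyseries on a temporary combined key and then split the key back into separate columns (TEAMNUMBER is an assumed field name for Team Number); this would replace the chart line:

| stats sum(transactionMade) as total by USERNUMBER USERNAME TEAMNUMBER POSTDATE
| eval key=USERNUMBER."|".USERNAME."|".TEAMNUMBER
| xyseries key POSTDATE total
| eval USERNUMBER=mvindex(split(key,"|"),0), USERNAME=mvindex(split(key,"|"),1), TEAMNUMBER=mvindex(split(key,"|"),2)
| fields - key
| table USERNUMBER USERNAME TEAMNUMBER *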
We are currently migrating our Splunk server to a new one, and during the change there was a mix-up and data got sent to the old instance (about 12 hours' worth) which we would like to transfer to our new Splunk instance. My thought was to do a search on the old one and then export the results. When I export in RAW format and then import it to the new one, the data looks good, but the field extractions for WinEventLog are not applied as they should be (even though I use the same event type). How can I solve this? I've also tried exporting as XML, JSON, and CSV, but the data looks worse than with RAW.
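A sketch of one way to re-ingest the exported raw file while keeping the original metadata, since the WinEventLog search-time extractions are tied to the sourcetype (the file path, sourcetype, and index below are placeholders; the real values need to match the original events, and the Windows add-on needs to be present on the new instance for the extractions to apply):

$ ./splunk add oneshot /tmp/exported_events.raw -sourcetype WinEventLog -index wineventlog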
How to Create Dataset inside DataModel and add new Fields in Dataset using Splunk SDK for Java
We need to monitor the Azure API Management self-hosted gateway and get all the traces. The gateway is an AKS container with image mcr.microsoft.com/azure-api-management/gateway:v2. Inside is a .NET application:

/app $ dotnet --info
Host:
  Version: 6.0.26
  Architecture: x64
  Commit: dc45e96840
.NET runtimes installed:
  Microsoft.AspNetCore.App 6.0.26 [/usr/share/dotnet/shared/Microsoft.AspNetCore.App]
  Microsoft.NETCore.App 6.0.26 [/usr/share/dotnet/shared/Microsoft.NETCore.App]

We would expect to monitor it the same way as any other .NET application, but we don't catch any BTs or traces. We also inject the following env:

- name: CORECLR_PROFILER
  value: "{57e1aa68-2229-41aa-9931-a6e93bbc64d8}"
- name: CORECLR_ENABLE_PROFILING
  value: "1"
- name: CORECLR_PROFILER_PATH
  value: "/opt/appdynamics-dotnetcore/libappdprofiler.so"
- name: LD_DEBUG
  value: all
- name: LD_LIBRARY_PATH
  value: /opt/appdynamics-dotnetcore/dotnet
- name: IIS_VIRTUAL_APPLICATION_PATH
  value: "/"

Please help.
I'm using Splunk Enterprise v9.0.2.1 and the MQTT modular input app is installed. When receiving JSON input for the MQTT modular input I'm getting:

ERROR JsonLineBreaker [1946662 parsing] - JSON StreamId:11645015375736311559 had parsing error:Unexpected character while looking for value: 'W' - data_source="mqtt", data_host="local", data_sourcetype="jklg_json"

jklg_json props.conf:

DATETIME_CONFIG =
INDEXED_EXTRACTIONS = json
KV_MODE = none
LINE_BREAKER = (\{\"message\":.*\})
NO_BINARY_CHECK = true
category = Custom
description = JavaScript Object Notation format. For more information, visit http://json.org/
disabled = false
pulldown_type = 1
SHOULD_LINEMERGE = false

Sample JSON: {"message":"hi","name":"jklg"}

How can I resolve this issue?
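Two hedged observations, since the raw MQTT payloads aren't shown: the 'W' in the error suggests the parser is hitting data that is not JSON at that point, and the capture group in LINE_BREAKER is treated as the event delimiter and discarded, so capturing the whole JSON object can strip it from the event. A more conventional stanza sketch, assuming each JSON object arrives as its own message/line:

[jklg_json]
SHOULD_LINEMERGE = false
LINE_BREAKER = ([\r\n]+)
INDEXED_EXTRACTIONS = json
KV_MODE = none
NO_BINARY_CHECK = true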
Unable to import splunk-sdk and splunklib in Python. Here are the errors I'm getting. Any suggestions?

splunklib:
error: command 'C:\\Program Files (x86)\\Microsoft Visual Studio\\2022\\BuildTools\\VC\\Tools\\MSVC\\14.39.33519\\bin\\HostX86\\x64\\cl.exe' failed with exit code 2
[end of output]
note: This error originates from a subprocess, and is likely not a problem with pip.
ERROR: Failed building wheel for pycrypto
Running setup.py clean for pycrypto
Failed to build pycrypto
ERROR: Could not build wheels for pycrypto, which is required to install pyproject.toml-based projects

splunk-sdk:
line 18, in <module>
from splunklib.six.moves import map
ModuleNotFoundError: No module named 'splunklib.six.moves'
[end of output]
note: This error originates from a subprocess, and is likely not a problem with pip.
error: metadata-generation-failed
× Encountered error while generating package metadata.
╰─> See above for output.
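A minimal sketch, assuming Python 3 and that the goal is the official Splunk Python SDK: it is published on PyPI as splunk-sdk and imported as splunklib, so a separately named splunklib package should not need to be installed (the pycrypto and splunklib.six.moves errors look like they come from other or older packages, which is an assumption):

pip install --upgrade splunk-sdk
python -c "import splunklib.client as client; print(client.__file__)"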