All Topics



Hello everyone, I have a problem with a search. I tried this: index="main" sourcetype="st_easyvista_generic" "Identifiant réseau"="PCW-*" Statut="En Service" | dedup "Identifiant réseau" | stats values("Entité (complète)") as entité | eval ss=mvindex(split(entité,"/"),0) | stats count by ss It only works for the lines that contain a slash. I don't know how to extract the first segment for all of the results. Thank you so much for your help.
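For reference, the intended split logic — keep everything before the first "/", falling back to the whole value when no slash is present — can be sketched in Python (the sample values are illustrative, not from the poster's data):

```python
def first_segment(value: str) -> str:
    """Return the portion before the first '/'.
    When the string contains no '/', split() yields a single-element
    list, so index 0 is the whole string -- the same behaviour one
    would expect from mvindex(split(x, "/"), 0) in SPL.
    """
    return value.split("/")[0]

print(first_segment("Direction A/Service B"))  # -> Direction A
print(first_segment("Direction C"))            # -> Direction C
```

Since index 0 exists whether or not the delimiter matched, the slash-only behaviour the poster observes is more likely caused by something else in the pipeline (for instance, the unquoted field name in `stats values(...)`).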
Hi All, in the search below, how do I convert seconds to HH:MM format? The age field holds a duration in seconds.   index=_internal source=*metrics.log group=tcpin_connections earliest=-2d@d | eval Host=coalesce(hostname, sourceHost) | eval age = (now() - _time ) | stats min(age) as age, max(_time) as LastTime by Host | convert ctime(LastTime) as "Last Active On" | eval Status= case(age < 1800,"Running",age > 1800,"DOWN") | rename age as Age | sort Status | table Host, Status, Age,"Last Active On"
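The arithmetic behind a seconds-to-HH:MM conversion is just integer division, sketched here in Python (in SPL the same result could likely be built with an eval using floor() and modulo, or with tostring(age, "duration") for HH:MM:SS):

```python
def secs_to_hhmm(seconds: int) -> str:
    # divmod splits the total into whole hours and the leftover seconds;
    # the leftover divided by 60 gives the minutes component.
    hours, rem = divmod(int(seconds), 3600)
    minutes = rem // 60
    return f"{hours:02d}:{minutes:02d}"

print(secs_to_hhmm(3725))  # -> 01:02
```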
Hi,   I have a query that outputs a table with the following 3 fields: Latitude, Longitude, Count.   How can I turn this query's output into JSON so that I can use it with React?   Many thanks, Patrick
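One possible shape for such a payload — a list of row objects serialized with the standard json module — is sketched below; the row values are made up for illustration (note that Splunk's search REST API can also return results as JSON directly via output_mode=json, which may remove the need for manual conversion):

```python
import json

# Hypothetical rows mirroring the three-column table in the question.
rows = [
    {"Latitude": 47.6, "Longitude": -122.3, "Count": 12},
    {"Latitude": 37.3, "Longitude": -122.0, "Count": 5},
]

# A React client could fetch this string and JSON.parse() it.
payload = json.dumps(rows)
print(payload)
```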
Hi,  I'm looking for a way to create a dropdown in a dashboard in which the dropdown values are grouped. For example, with the table below, I want a dropdown with 2 values called "Site": Washington and California. Then, in the statistics table, it will list the IDs matching that "Site".
Address | ID
1 Microsoft Way, Redmond, Washington | 132
15 Microsoft Way, Redmond, Washington | 456
10 Microsoft Way, Redmond, Washington | 789
1 Infinite Loop, Cupertino, California | 111
2 Infinite Loop, Cupertino, California | 222
3 Infinite Loop, Cupertino, California | 489
I imagine having a list of labels in the dropdown, then searching for the values corresponding to each label:
<input type="dropdown" token="site" searchWhenChanged="true">
<label>Site</label>
<choice value="Washington">Washington</choice>
<choice value="California">California</choice>
<choice value="*">All</choice>
</input>
|inputlookup address.csv |where like(Address,"%Washington%")
But I don't know how to put it together so it works. Do you have an idea how this can work, or another approach, please? Thanks in advance!
Hi All, good day. We have installed forwarders on multiple Windows servers. Is there a Splunk search to show the memory usage of all servers, or to flag usage greater than 80%, and also which process is consuming the most memory?   Thanks
Hi All,  I am trying to tabulate an error ratio. Each log event is unique and can contain multiple error codes within the event (or none, if it was a success). Using regex to split out the error codes filters the set of events, so the overall hit count is wrong when calculating the percentage — the initial number of unique events is not being preserved by eventstats. A sample log event (JSON format) is below, with multiple error codes in the same event that need an error-wise split: log:<<field1>>,<<field2>>,<<field3>>,error=60KOANEWLH=500.EBS.SYSTEM.100:67MPW4X79FOJ=500.IMS.SERVEROUT.100:3534U6ZIZY39=500.EBS.SERVERIN.100;404.IMS.SERVEROUT.105:3M8TEWEKVIJK=500.IVS.XXXXX.100;404.IMS.XXXX.105:2ILTH9G0UMG1=500.IMS.XXXXXXXX.100:0UAQL48U2KWF=500.EBS.XXXXXXX.100;404.IMS.XXXXXXXXX.105, missingFulfillmentItems,<<field4>>,<<field5>>,<<field6>> I would like to get each error code's percentage, mainly (500.XX.XXXX.100 count / total hits). Below is the search I am using, but total_hits is not being populated. Please correct me if anything is missed, and could someone assist with an alternate way to compute the error trend? Thanks in advance. index=<indexname> "Search String" "Type"=prod  | eventstats sum(index) as total_hits | rex field="log.log" ", error=*(?<errorMap2>.+), missingFulfillmentItems" | eval errors0=replace(errorMap2, "=", ";") | eval errors1=split(errors0,":") | rex field=errors1 "(?<errorCodes>.*)" | mvexpand errorCodes | eval errorCodes1=split(errorCodes, ";") | mvexpand errorCodes1 | where like(errorCodes1,"%500.IMS.%") | stats count by errorCodes1,total_hits
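The structure of the error field — ":"-separated entries of the form id=code, where an entry may carry several ";"-separated codes — can be parsed outside SPL to sanity-check the counting logic. This Python sketch uses a shortened version of the sample string and an assumed total-event count (in SPL the total would need to be captured, e.g. with eventstats count, before any filtering discards events):

```python
from collections import Counter

# Shortened sample from the post: entries split on ":", each entry is
# "<item id>=<code>[;<code>...]".
raw = ("60KOANEWLH=500.EBS.SYSTEM.100:"
       "67MPW4X79FOJ=500.IMS.SERVEROUT.100:"
       "3534U6ZIZY39=500.EBS.SERVERIN.100;404.IMS.SERVEROUT.105")

codes = Counter()
for entry in raw.split(":"):            # one entry per fulfillment item
    _, _, code_part = entry.partition("=")  # drop the item id
    for code in code_part.split(";"):   # an item can carry several codes
        codes[code] += 1

total_hits = 100                        # assumed total, for illustration
for code, n in sorted(codes.items()):
    print(code, round(100 * n / total_hits, 2))
```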
Hi, I have been trying Security Lake for a few days. After dealing with lots of errors, I was finally able to activate Security Lake in my account, but then I wanted to ingest that data into Splunk. I followed the official document to connect my AWS account to Splunk: https://github.com/splunk/splunk-add-on-for-amazon-security-lake/blob/main/Splunk%20Add-on%20for%20Amazon%20Security%20Lake.pdf It seems the AWS account is connected, but there is a permission issue with SQS: when I try to configure an input, I get an "Access denied to ListQueues" error. I checked the permissions, and they have already been granted to the role. Please help me with this, as Security Lake is completely new in AWS and there are not many resources available to consult. I am attaching a screenshot of the error in Splunk.
Hi, We have deployed the Splunk on-prem components: heavy forwarder, syslog-ng, and deployment server.  We believe they are configured correctly — we can install the universal forwarder on an endpoint and see the endpoint in the on-prem console.   Now, to get this information flowing out to Splunk Cloud, we have to go via a proxy server, as it is the only way out of the environment. We've configured the servers and the proxy to allow communication with the yum repositories, so we know they can get out and connect. The issue we are now having is that we can't get the data flowing out to the cloud. I believe I've followed this article correctly: https://docs.splunk.com/Documentation/Splunk/9.0.2/Forwarding/ConfigureaforwardertouseaSOCKSproxy This config should be on the heavy forwarder — is that correct? We also believe the proxy server is configured correctly, but that could be wrong. Does anyone have ideas/pointers/advice on why this doesn't work and how to resolve it?
Hi hello, I am receiving data with a destination country, and based on that I need to show the country's flag in a dashboard. I can upload flag images to appserver/static, but I was wondering if there is another solution. Does Splunk provide some database of flags? What would be the best approach? Thank you
Hello, AppDynamics agent inventories cannot be exported from the UI. There are many similar topics in the forum, but in my opinion none had a definitive answer. I'm not a programmer, so I'm sharing the script I prepared for those who, like me, need ready-made code. I think it would be really useful for everyone, and for the AppDynamics teams, if PowerShell, Python, and API query examples were shared — perhaps under a separate topic. (Life gets better when shared.) The moderators mostly just give directions, so for those of us who don't know software, I ask developers and script writers to please share more scripts and code.
import pandas as pd
import requests
import urllib3
from requests.auth import HTTPBasicAuth

urllib3.disable_warnings()

auth = HTTPBasicAuth('yourusername@customer1', 'yourpasswd')
app_url = 'https://yourapmurl/controller/rest/applications?output=json'
app_resp = requests.get(app_url, verify=False, auth=auth).json()

dfs = list()
for item in app_resp:
    name, node_id = item['name'], item['id']
    node_url = f'https://yourapmurl/controller/rest/applications/{node_id}/nodes?output=json'
    response = requests.get(node_url, verify=False, auth=auth).json()
    df = pd.DataFrame(response)
    if len(df) > 0:
        df = df[['machineName', 'tierName', 'appAgentVersion']]
        df['name'] = name
        df['node_id'] = node_id
    else:
        print(node_id)
    dfs.append(df)

result = pd.concat(dfs).reset_index(drop=True)
save_dir = r'export_your_path'
result.to_excel(f'{save_dir}\\yourdata.xlsx', index=False)
Hi, I was just wondering what level of access is required to view other users' private PDF schedules? And how can these then be made visible, with the correct level of access? Thanks, Rob.
I am calculating a health rate for projects based on specific criteria; generally it's the SUM of projects ranked A or B divided by the total number of projects.   I am trying to display a timechart of the health score as a function of time, but with no luck. Here is my search:     basesearch | streamstats values(pipelineRun{}) as pipelines dc(pipelineRun{}) as num_pipelines by fullPath | spath path=project.Findings output=Findings | mvexpand Findings | spath input=Findings | eval ProjectRank=mvappend(ProjectRank, case(A>0 OR B>9, "F", A=1 OR (B<9 AND B>2) , "B", A=0 AND B=0, "A")) | eval PipelinesRank=mvappend(PipelinesRank, if(num_pipelines>8, "A", "F")) | eval ProjectFinalRank=mvappend(ProjectFinalRank, case(ProjectRank="F" OR PipelinesRank="F", "F", PipelinesRank="A" AND ProjectRank="B", "B", PipelinesRank="A" AND ProjectRank="A", "A")) | stats count by group ProjectFinalRank | stats sum(eval(if(ProjectFinalRank="A" OR ProjectFinalRank="B",count,0))) AS HIGH sum(count) AS Total by group | eval HealthRate=round(HIGH*100/Total,2)
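The HealthRate arithmetic at the end of that search — count of A/B-ranked projects over the total, as a rounded percentage — is sketched below in Python with made-up ranks (to get the time dimension, the SPL stats would likely need _time bucketed, e.g. with bin, as an additional by-field):

```python
# Hypothetical final ranks for five projects in one time bucket.
ranks = ["A", "B", "F", "A", "B"]

# HIGH = projects ranked A or B; Total = all projects.
high = sum(1 for r in ranks if r in ("A", "B"))
rate = round(high * 100 / len(ranks), 2)
print(rate)  # -> 80.0
```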
Hi all, Within Splunk ES I've configured a test threat intelligence feed with the following settings: New > Line oriented Name: Binary Defense Banlist type: network url: https://www.binarydefense.com/banlist.txt weight: 60 interval: 43200 Max Age: -30d Max Size: 52428800 Checked Threat Intelligence File parser: line Delimiting regular exp:  Extracting regex: ^(\d.+)$ Ignoring regex: (^#|^\s*$) fields: ip$1,description:BinaryDefense_banlist skip header lines: 0 No encoding, no user agent, sinkhole checked. Some global parse modifier settings: Certificate attribute breakout = checked IDNA encode domains = unchecked Parse domain from URL = unchecked In debug mode I see that the file is downloaded and then it says: <timestamp> INFO pid=1050977 tid:MainThread file=get_parser.oy:_detect_file_type:139 | stanza"binary Defense Banlist" status="Automatically detected STIX parsing for file_path /opt/splunk/var/lib/splunk/modinputs/threatlist/Binary Defense Banlist" It goes on to parse the file and get the records. However, the records contain HTML elements like <'\div> and <\iframe> as url value. This is strange since it's just a .txt file. Moreover, why is it parsing it like a STIX document when I explicitly stated that the File parser = line? This happens with other threat feeds as well. I've checked with a colleague at another client and with the exact same settings his works and mine doesn't.   Am I missing something? Do you know where else I can look to troubleshoot?   Some figures: Splunk: 8.2.9 ES: 7.0.1 Single search head, behind proxy
index="hx_vm" LogName="Microsoft-Windows-Sysmon/Operational" "EventCode=11" ComputerName=DESKTOP-933JR8B | eval {name} = replace("C:\Windows\SysWOW64\OneDriveSetup.exe","\", "\\") | search Image = name | table _time, TargetFilename     I'm finding the variable usage part difficult.
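The backslash-doubling the replace() is attempting can be illustrated in Python; the point is that every single backslash becomes two (in SPL, replace() treats its pattern as a regex, so the backslash arguments themselves would likely need further escaping, e.g. "\\\\" in the replacement):

```python
path = r"C:\Windows\SysWOW64\OneDriveSetup.exe"

# Double every backslash so the value survives one level of
# escape processing (e.g. when embedded in a quoted expression).
escaped = path.replace("\\", "\\\\")
print(escaped)  # -> C:\\Windows\\SysWOW64\\OneDriveSetup.exe
```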
What is the difference between these Add-ons? Is there any reason you would install both? Also, should syslog be turned off on the esxi and vCenter entities to keep from duplicating data? Splunk Add-on for VMware Metrics (https://splunkbase.splunk.com/app/5089) and Splunk Add-on for VMware (https://splunkbase.splunk.com/app/3215) thanks, Ken
Is there a way to get alerts when routers or switches go down on your network or any endpoint?    V/R SD
Good evening everyone. Given that the Splunk Add-on for Infrastructure is now end-of-life, is there any other way to monitor network devices?
Our main Splunk administrator retired, and we have since disabled the Active Directory account he used to create and manage hundreds of Splunk searches, which are now listed as Orphaned under Settings \ All Configurations \ Reassign Knowledge Objects \ Orphaned. We have the option of reassigning these searches either to other domain accounts belonging to regular, non-admin Splunk users, or to the built-in default Splunk admin account, which is a local account on the box with no domain permissions. So the question is: should we do that, given this warning? "Knowledge object ownership changes can have side effects such as giving saved searches access to previously inaccessible data or making previously available knowledge objects unavailable. Review your knowledge objects before you reassign them." We are running Splunk version 9.0.0 on the Microsoft Windows platform.
Hi,  We're preparing to upgrade Splunk Enterprise from 8 to 9 and have a question about this requirement: "For distributed deployments of any kind, confirm that all machines in the indexing tier satisfy the following conditions: ... They do not run their own saved searches." If our indexers are also search heads, would that violate this?
I have a SH that is not part of SH Cluster.  The SH is connected to an Index Cluster.  I am seeing the following errors on the Indexers (W.X.Y.Z is the IP address of the SH) ERROR TcpInputProc [2317 FwdDataReceiverThread] - Error encountered for connection from src=W.X.Y.Z:46788. error:140760FC:SSLroutines:SSL23_GET_CLIENT_HELLO:unknown protocol I don't think there is a mismatch of sslVersions.   Please help me troubleshoot this.