All Topics

Hello, I have these strings, each one different from the others. How can I split these lines to keep only the values starting with "FEA_" and remove everything after the "-"?
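One possible approach is a rex extraction (a sketch; the field name line is an assumption, replace it with your actual field):

| rex field=line "(?<fea_value>FEA_[^-]+)"
| table fea_value

If a single event can contain several FEA_ values, max_match=0 makes fea_value multivalued:

| rex field=line max_match=0 "(?<fea_value>FEA_[^-]+)"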
Hi Team, I am getting this error on the universal forwarder:

07-10-2023 14:18:24.639 +0200 WARN TailReader [16165 tailreader1] - Could not send data to output queue (parsingQueue), retrying...
07-10-2023 12:59:18.463 +0200 INFO HealthChangeReporter - feature="TailReader-1" indicator="data_out_rate" previous_color=yellow color=red due_to_threshold_value=2 measured_value=2 reason="The monitor input cannot produce data because splunkd's processing queues are full. This will be caused by inadequate indexing or forwarding rate, or a sudden burst of incoming data."

On the UF I configured:

[thruput]
maxKBPS = 0

CPU usage is below 50%, but I am still facing the issue. In the metrics logs I am getting perc95(current_size)=7020604.800000001 for name=tcpout_SplunkCloud. The errors are on the universal forwarder, and the logs from the UF are being pushed to Splunk Cloud, i.e. to the indexers in the cloud. @gcusello please help me with this.
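To narrow down which queue is actually filling up, one diagnostic query against the UF's metrics.log might look like this (a sketch; my-uf-host is a placeholder for the real host name):

index=_internal source=*metrics.log* host=my-uf-host group=queue
| eval pct_full = round(current_size_kb / max_size_kb * 100, 1)
| timechart span=5m perc95(pct_full) by name

If tcpout_SplunkCloud stays near 100%, the bottleneck is downstream (network path or indexers) rather than the UF's thruput limit.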
Hi All, I have a table showing the number of Helpdesk calls and their count for the year, e.g.

| search "problemtype.detailDisplayName"!=*AGRESSO*
| eval problem_detail='problemtype.detailDisplayName'
| eval problem_detail=replace(problem_detail, "&#8226","")
| eval problem_detail=replace(problem_detail, ";","|")
| eval techGroupLevel = 'techGroupLevel.levelName'
| eval techGroupLevel = replace(techGroupLevel, " "," ")
| eval techGroupLevel = replace(techGroupLevel, " ","")
| eval techGroupLevel = replace(techGroupLevel, "Level"," Level")
| eval location_Name = 'location.locationName'
| eval status = 'statustype.statusTypeName'
| eval priority = 'prioritytype.priorityTypeName'
| eval techGroupId = 'techGroupLevel.id'
| eval tech_Name = 'clientTech.displayName'
| table _time id displayClient location_Name problem_detail detail type bookmarkableLink status priority techGroupId techGroupLevel tech_Name reportDateUtc lastUpdated closeDate
| search techGroupLevel = "*"
| stats count as tech_group_requests by techGroupLevel
| sort -tech_group_requests

techGroupLevel          tech_Group_Requests
Hardware Level 1        10000
Applications Level 1    800
Printer                 758
MIS                     7
NULL                    8

I would like to combine the results of "Hardware Level 1" and "Printer" under a new label, "Device Management", and likewise combine "MIS" and "NULL" as "Other", with both showing the combined count, i.e.

techGroupLevel          tech_Group_Requests
Device Management       10758
Applications Level 1    800
Other                   15

I have used | eval techGroupLevel=case(match(techGroupLevel, "HARDWARE"), "Device Management") but I'm stuck on how to include Printer in this code. Thank you.
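One possible way is to relabel before the stats so the counts sum up on their own; a sketch, assuming the label values shown above:

| eval techGroupLevel=case(
    match(techGroupLevel, "(?i)hardware|printer"), "Device Management",
    match(techGroupLevel, "(?i)^(MIS|NULL)$"), "Other",
    true(), techGroupLevel)
| stats count as tech_group_requests by techGroupLevel
| sort -tech_group_requests

Placing the eval before the existing stats means "Hardware Level 1" and "Printer" land in the same group and their counts merge without any further step.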
Hello Splunkers, today I was asked whether it would be possible to view specific glass tables in ITSI without having to log in to Splunk first (e.g. for displays or for upper management). Unfortunately, I only found the article for embedding regular Splunk reports, but nothing for ITSI glass tables. Do you know if this is possible in general and, if yes, where I could find some information on how to do it? Thank you in advance, Carsten
Hello, I'm attempting to install an older version of an add-on in Splunk Cloud. I've tried installing it using the "install from a file" option, but I received a warning stating, "This app is available for installation directly from Splunk base. To install the app, visit the App Browser page in Splunk Web." However, when I try to install the app from the "Find more apps" page, it only allows me to install the latest updated version, not any older releases. I'm wondering if the only option to install the older version of the add-on is through a Splunk Support ticket. Please provide guidance. Thank you,
I want to find all saved searches (alerts or reports) that are scheduled but return no events or no data after the query runs.
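One way to approach this is via the scheduler logs in the _internal index; a sketch (result_count is populated by the scheduler for completed runs):

index=_internal sourcetype=scheduler status=success result_count=0
| stats count as empty_runs latest(_time) as last_run by savedsearch_name, app
| convert ctime(last_run)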
Hi, I have a dataset where data is ingested into Splunk once a day at 5 PM. Below is the dataset; USED_SPACE and TOTAL_space are in MB:

date=10/07/2023 17:00:00 drive=HD USED_SPACE=10 TOTAL_space=100
date=09/07/2023 17:00:00 drive=HD USED_SPACE=12 TOTAL_space=100
date=08/07/2023 17:00:00 drive=HD USED_SPACE=13 TOTAL_space=100
date=07/07/2023 17:00:00 drive=HD USED_SPACE=10 TOTAL_space=100

Based on the growth of USED_SPACE per day, I want to calculate the days left for the drive to reach TOTAL_space, i.e. in how many days it would reach total capacity. I need this as a separate column, "days remaining".
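A sketch of one way to compute this, assuming _time is already parsed from the date field (e.g. via strptime(date, "%d/%m/%Y %H:%M:%S")) and the growth is averaged over the available days; with several drives you would need streamstats by drive instead of delta:

| sort 0 _time
| delta USED_SPACE as daily_growth
| stats avg(daily_growth) as avg_growth, latest(USED_SPACE) as used, latest(TOTAL_space) as total by drive
| eval days_remaining = if(avg_growth > 0, round((total - used) / avg_growth, 1), null())

Note that in the sample data usage sometimes shrinks, so avg_growth can be zero or negative; the if() leaves days_remaining empty in that case.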
Hi, I have an accelerated datamodel. This datamodel has a lookup field based on a KV store lookup; that is, the datamodel takes another calculated numerical field and uses the lookup to get a string value for that number. However, after some testing I see that the lookup field in the accelerated datamodel does not have the correct value. I cross-checked by using the lookup directly on the raw data, and there it works fine. I also double-checked the expression for the lookup field, making sure it actually uses the correct values. The interesting thing is that the lookup field is indeed calculated, so the KV store lookup runs and gets a "hit", but the row that is returned from the lookup is wrong. It seems to me that this could be some kind of bug, e.g. related to memory on the search head, that causes the lookup field in the accelerated datamodel to bug out. The KV store lookup is quite big. Has anyone else experienced this and knows a way around it?

My lookup field in the datamodel:

{
  calculationID: 123456
  calculationType: Lookup
  comment:
  editable: true
  lookupInputs: [
    {
      inputField: mynumber
      lookupField: mynumber
    }
  ]
  lookupName: number_to_string
  outputFields: [
    {
      comment:
      displayName: mystring
      editable: true
      fieldName: mystring
      fieldSearch:
      hidden: false
      lookupOutputFieldName: mystring
      multivalue: false
      owner: mydatamodel
      required: false
      type: string
    }
  ]
  owner: mydatamodel
}

The field "mynumber" is based on an eval expression in the datamodel (which is defined before the lookup field):

{
  calculationID: 456789
  calculationType: Eval
  comment:
  editable: true
  expression: 'mynumber_raw'
  outputFields: [
    {
      comment:
      displayName: mynumber
      editable: true
      fieldName: mynumber
      fieldSearch:
      hidden: false
      multivalue: false
      owner: mydatamodel
      required: false
      type: string
    }
  ]
  owner: mydatamodel
}
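One way to quantify the mismatch is to compare what the acceleration stored against a fresh run of the same lookup; a sketch, assuming the root dataset of mydatamodel is named myevents (adjust to your real dataset name):

| tstats summariesonly=true count from datamodel=mydatamodel by myevents.mynumber, myevents.mystring
| rename myevents.* as *
| lookup number_to_string mynumber OUTPUT mystring as mystring_fresh
| where mystring != mystring_fresh

Any rows surviving the where show exactly which mynumber values were stored with a stale or wrong mystring at acceleration time.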
Hi, I am trying to use two different versions of the MySQL JDBC connector with splunk_app_db_connect, because one of my databases doesn't support the latest JDBC driver and there is no single version that works with both. So I have to use two JDBC drivers. Is there any example configuration for this?
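As far as I know, DB Connect loads every jar in its drivers directory onto one JVM classpath, so two versions of the same driver class can collide; that said, old and new MySQL Connector/J expose different driver classes (com.mysql.jdbc.Driver vs com.mysql.cj.jdbc.Driver). A heavily hedged sketch of a second connection type in db_connection_types.conf (the stanza name mysql_legacy and the serviceClass value are assumptions to verify against your DB Connect version):

[mysql_legacy]
displayName = MySQL (legacy driver)
serviceClass = com.splunk.dbx2.MySQLJDBC
jdbcUrlFormat = jdbc:mysql://<host>:<port>/<database>
jdbcDriverClass = com.mysql.jdbc.Driver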
The hadoopmon_spark.sh script is missing from the Hadoop Monitoring Add-on.
Hi, I am trying to listen to a local network adapter and localhost traffic. For that I am using Splunk Stream on a Windows 10 machine. But I noticed Splunk Stream doesn't capture localhost and network adapter traffic at the same time. Is this a bug, or is that normal? My main purpose is to listen to TDS traffic. If I write the adapter names one by one, it works. For example, in streamfwd.conf:

[streamfwd]
port = 8889
ipAddr = 127.0.0.1
streamfwdcapture.0.interface = \Device\NPF_{D4BEDB74-F8CD-4A72-8615-ABF1E3E8823B}

That works. This also works:

[streamfwd]
port = 8889
ipAddr = 127.0.0.1
streamfwdcapture.0.interface = \Device\NPF_Loopback

But this doesn't work:

[streamfwd]
port = 8889
ipAddr = 127.0.0.1
streamfwdcapture.0.interface = \Device\NPF_{D4BEDB74-F8CD-4A72-8615-ABF1E3E8823B}
streamfwdcapture.1.interface = \Device\NPF_Loopback

How can I fix this? I have to listen on all interfaces.
Hi Splunkers, for a customer we are performing a migration in Windows log collection: as suggested by some of you in another topic, we are moving from the WMI method to the UF one (and it is very, very, very - have I already said "very"? - better). We encountered a difference from WMI that we don't know how to solve, so here I am asking for your help. First, a little recap of the architecture: UFs are installed on the DCs, and data is sent to an HF, which in turn forwards it to a Splunk Cloud instance. So the flow is: DCs with UF installed -> HF -> Splunk Cloud. When we configured WMI, we selected the "classic" logs (Security, Application, System) plus DNS and PowerShell. In particular, our SOC is interested in DNS query logs. When we installed the UF (with the graphical wizard), we found only the "classic" options: Security, Application and System. If we want to also collect DNS query logs and PowerShell ones, how can we achieve this using the UF? I suspect we need to modify the inputs.conf file, but is my assumption correct? And if yes, how do I go on?
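Yes, on a UF these are just additional stanzas in inputs.conf (e.g. in etc/system/local or a deployment app). A sketch; note that the "DNS Server" event log channel carries service events, while per-query logging usually means enabling DNS debug/analytical logging on the DC and monitoring that file, so the path and sourcetype below are assumptions to verify:

# inputs.conf on the DC's universal forwarder
[WinEventLog://DNS Server]
disabled = 0

[WinEventLog://Microsoft-Windows-PowerShell/Operational]
disabled = 0

# only if DNS debug logging is enabled and writes to this file:
[monitor://C:\Windows\System32\dns\dns.log]
disabled = 0
sourcetype = dns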
Hi All, I installed Splunk Enterprise for Windows v9.1.0.1, but inside C:\Program Files\Splunk I can see only these folders: /etc, /python, /share and /var. Other folders are missing, including /bin. Has anyone else faced this issue?
Step 1: the search command returns strings that contain URL-encoded characters. Step 2: I used the urldecode function. Step 3: I used the stats command. What can I do? Any help is highly appreciated. 🙂 Step 4: I tried using the Decrypt2 app, but it doesn't work if the string is long.
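If I've understood the steps, the usual pattern looks like this (a sketch; myfield is a placeholder for your actual field name):

... | eval decoded=urldecode(myfield)
    | stats count by decoded

Note that urldecode only reverses percent-encoding; if the string is additionally Base64-encoded or encrypted, a single eval won't recover it, which may be why the long strings fail.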
Hello, I want to extract only the leading words and exclude everything that comes after the numbers, e.g.

Apple12ed
Apple456ppp
Orange234iw
Banana7ye

Expected result:
Apple
Orange
Banana

I have tried the following, but each string has different numbers and words, so the result is not correct:

| eval Fruits = substr(Fruits, 1, len(Fruits)-4)

Incorrect result:
Apple
Apple45
Orange2
Banan

Thanks in advance.
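A regex that keeps only the leading letters avoids having to know the lengths; a sketch, assuming the values are in a field called Fruits:

| rex field=Fruits "^(?<Fruits>[A-Za-z]+)"

or equivalently with eval:

| eval Fruits = replace(Fruits, "\d.*$", "")

Both stop at the first digit, so the trailing numbers and letters are dropped regardless of length.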
Hi All, below is a bar chart that has the blocks ordered alphabetically, but I want them in this order: pa, failed, aborted, not tested. Also, is it possible to rearrange them like this in the XML code itself?
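The chart generally follows the row order of the search results, so one option is to sort by an explicit rank before charting; a sketch, assuming a split-by field called status with the values roughly as listed (the truncated "pa" is kept as a placeholder):

... | stats count by status
    | eval order=case(status="pa", 1, status="failed", 2, status="aborted", 3, status="not tested", 4)
    | sort order
    | fields - order

Since this lives in the search itself, it can go directly into the <query> element of the dashboard XML.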
Hello Community, I am fairly new to Splunk, and I am struggling with this. Here is my raw event: these are discrepancy events that show a reported discrepancy between two JSONs (for the context of this problem, those JSONs do not need to be known). Assume there are n events similar to the sample JSON below.

{
  "severity": "INFO",
  "time": "2023-07-09 18:53:53.930",
  "Stats": {
    "discrepancy": 10
  },
  "discrepancyDetails": {
    "record/0": "#DEL",
    "record/1": "#DEL",
    "recordD": "#DEL",
    "recordX": "expected => actual",
    "recordY": "someExpectedVal => null",
    "recordN": "someExpectedValN => null"
  }
}

(For recordY and recordN, the actual value is null.)

Stats.discrepancy provides the total count, while discrepancyDetails provides the actual discrepancies. I want to derive some statistics from this, namely:

1. All the unique discrepancyDetails keys with their respective counts.
2. Before counting, remove all numeric characters from the key. For example, the sample JSON has two keys "record/0" and "record/1" in discrepancyDetails; I want to treat both as "record/", replacing the numeric part with an empty string.
3. Figure out all the keys with a null actual value (the "actual" in "expected => actual") and with "#DEL" (deleted) values.

I was able to obtain the unique count of all the keys using the following query:

index="demo1" sourcetype="demo2"
| search discrepancyDetails AND Stats
| spath "Stats.discrepancy"
| search "Stats.discrepancy" > 0
| stats count(discrepancyDetails.*) as discrepancyDetails.*
| transpose

I am unable to figure out points 2 and 3 from the above requirements. Desired output for requirement 2, given the sample JSON:

Unique_key   count
record/      2
recordD      1
recordX      1
recordY      1
recordN      1

Desired output for requirement 3, given the sample JSON:

Unique_key          null or #DEL   count
record/, recordD    #DEL           2
recordY, recordN    null           2
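For requirements 2 and 3, one route is to flatten the discrepancyDetails.* fields into key/value rows and normalise the keys; a sketch, with the digit stripping done via replace (field names as in the sample):

index="demo1" sourcetype="demo2"
| spath "Stats.discrepancy"
| where 'Stats.discrepancy' > 0
| rename discrepancyDetails.* as DD_*
| fields DD_*
| streamstats count as row
| untable row key value
| eval key=replace(replace(key, "^DD_", ""), "\d+", "")
| stats count by key

and, continuing after the same untable/eval, for requirement 3:

| eval kind=case(value == "#DEL", "#DEL", match(value, "=>\s*null$"), "null")
| where isnotnull(kind)
| stats values(key) as Unique_key, count by kind

The untable turns each DD_* column into its own row, so stripping digits with replace merges "record/0" and "record/1" into a single "record/" bucket before counting.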
Is it possible for Splunk to do FAT/thick-client or desktop-based application synthetic transaction monitoring? For example: capture availability, performance, and page-load KPIs for the scenario below and perform tests against these KPIs/recorded script. The user logs into his/her laptop --> accesses an application by clicking a GUI client --> performs some transactions --> closes the GUI client.
Is it expected that there is no option to launch the app or to configure a data input?
Hi, I want to restore a KV store lookup from a backup system, not from the Splunk backup. Is there a way to restore a KV store lookup file in case I don't have the backup set up in Splunk? Or do I have to configure the Splunk backup to restore it? If so, where is the KV store lookup file that contains the data itself located (besides $SPLUNK_HOME/var/lib/splunk/kvstore, which contains collection.conf and transforms.conf)? I appreciate your assistance, thanks!
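For reference, the collection data itself lives as Mongo data under $SPLUNK_HOME/var/lib/splunk/kvstore/mongo, which is not restorable file-by-file per lookup; recent Splunk versions instead ship CLI helpers (a sketch; the archive name is made up, and the commands should be verified against your version's docs):

splunk backup kvstore -archiveName mybackup
splunk restore kvstore -archiveName mybackup

Archives land in $SPLUNK_HOME/var/lib/splunk/kvstorebackup by default. For a single lookup, a scheduled outputlookup export to CSV is often the simpler strategy for an external backup system.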