All Topics



Hello, I have a row with 5 columns of the same type, and I want to compare the values in the cells of these 5 columns. How can I do it? Thanks.
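One way to compare a fixed set of fields in SPL is a chained eval; this is a sketch assuming the five columns are named col1 through col5 (substitute your actual field names):

```spl
... | eval all_equal=if(col1=col2 AND col2=col3 AND col3=col4 AND col4=col5, "yes", "no")
    | eval max_val=max(col1, col2, col3, col4, col5)
    | table col1 col2 col3 col4 col5 all_equal max_val
```

If the column names share a prefix, a `foreach col*` loop can express the same comparison without listing every field by hand.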
Hi team,

##### Monitor inputs

# ERROR Log for SQL Server
[monitor://C:\Program Files\Microsoft SQL Server\MSSQL*\MSSQL\Log\ERRORLOG*]
sourcetype = mssql:errorlog
disabled = 0
index=sqlserver

# Default SQL Server Agent Log for the SQL Server Agent Service of SQL Server
[monitor://C:\Program Files\Microsoft SQL Server\MSSQL*\MSSQL\Log\SQLAGENT.OUT]
sourcetype = mssql:agentlog
disabled = 0
index=sqlserver

##### Windows performance monitoring inputs

### Performance Monitoring for System
[perfmon://sqlserverhost:processor]
object = Processor
counters = % Processor Time
instances = _Total
interval = 60
showZeroValue = 1
disabled = 0
index=sqlserver

[perfmon://sqlserverhost:logicaldisk]
object = LogicalDisk
counters = Avg. Disk sec/Read; Avg. Disk sec/Write
instances = *
interval = 60
showZeroValue = 1
disabled = 0
index=sqlserver

[perfmon://sqlserverhost:physicaldisk]
object = PhysicalDisk
counters = Disk Reads/sec; Disk Writes/sec; Avg. Disk sec/Read; Avg. Disk sec/Write; Avg. Disk sec/Transfer; Disk Read Bytes/sec; Disk Write Bytes/sec;Avg. Disk Queue Length
instances = *
interval = 60
showZeroValue = 1
disabled = 1
index=sqlserver

[perfmon://sqlserverhost:network]
object = Network Interface
counters = Current Bandwidth; Bytes Total/sec
instances = *
interval = 60
showZeroValue = 1
disabled = 0
index=sqlserver

[perfmon://sqlserverhost:memory]
object = Memory
counters = % Committed Bytes In Use;Pages/sec;Available Mbytes;Pages Input/sec;Free System Page Table Entries
interval = 60
showZeroValue = 1
disabled = 0
index=sqlserver

[perfmon://sqlserverhost:paging_file]
object = Paging File
counters = % Usage;% Usage Peak
instances = *
interval = 60
showZeroValue = 1
disabled = 0
index=sqlserver

[perfmon://sqlserverhost:process]
object = Process
counters = Private Bytes;% Processor Time
instances = sqlservr
interval = 60
showZeroValue = 1
disabled = 0
index=sqlserver

[perfmon://sqlserverhost:system]
object = System
counters = Processor Queue Length;Context Switches/sec
instances = *
interval = 60
showZeroValue = 1
disabled = 0
index=sqlserver

### Performance Monitoring for SQL Server
[perfmon://sqlserver:buffer_manager]
object = (SQLServer|MSSQL[^:]*):Buffer Manager
counters = *
interval = 60
showZeroValue = 1
disabled = 0
index=sqlserver

[perfmon://sqlserver:memory_manager]
object = (SQLServer|MSSQL[^:]*):Memory Manager
counters = Total Server Memory(KB);Target Server Memory(KB);Granted Workspace Memory (KB);Maximum Workspace Memory (KB);Memory Grants Outstanding;Memory Grants Pending;Target Server Memory (KB)
interval = 60
showZeroValue = 1
disabled = 0
index=sqlserver

[perfmon://sqlserver:databases]
object = (SQLServer|MSSQL[^:]*):Databases
counters = Active Transactions;Data File(s) Size (KB);Log File(s) Size (KB);Log File(s) Used Size (KB);Transactions/sec
instances = *
interval = 60
showZeroValue = 1
disabled = 0
index=sqlserver

[perfmon://sqlserver:general_statistics]
object = (SQLServer|MSSQL[^:]*):General Statistics
counters = User Connections;Processes blocked;Logins/sec;Logout/sec
interval = 60
showZeroValue = 1
disabled = 0
index=sqlserver

[perfmon://sqlserver:sql_statistics]
object = (SQLServer|MSSQL[^:]*):SQL Statistics
counters = Batch Requests/sec;SQL Compilations/sec;SQL re-Compilations/sec;SQL Attention Rate/sec;Auto-Param Attempts/sec;Failed Auto-Params/sec;Safe Auto-Params/sec;Unsafe Auto-Params/sec
interval = 60
showZeroValue = 1
disabled = 0
index=sqlserver

[perfmon://sqlserver:access_methods]
object = (SQLServer|MSSQL[^:]*):Access Methods
counters = Forwarded Records/sec;Full Scans/sec;Index Searches/sec;Page Splits/sec;Workfiles Created/sec;Worktables Created/sec;Worktables From Cache Ratio;Table Lock Escalations/sec
instances = *
interval = 60
showZeroValue = 1
disabled = 0
index=sqlserver

[perfmon://sqlserver:latches]
object = (SQLServer|MSSQL[^:]*):Latches
counters = Latch Waits/sec;Avg Latch Wait Time (ms);Total Latch Wait Time (ms)
interval = 60
showZeroValue = 1
disabled = 0
index=sqlserver

[perfmon://sqlserver:sql_errors]
object = (SQLServer|MSSQL[^:]*):SQL Errors
counters = Errors/sec
instances = DB Offline Errors;Info Errors;Kill Connection Errors;User Errors;_Total
interval = 60
showZeroValue = 1
disabled = 0
index=sqlserver

[perfmon://sqlserver:locks]
object = (SQLServer|MSSQL[^:]*):Locks
counters = Number of Deadlocks/sec;Average Wait Time (ms)
instances = *
interval = 60
showZeroValue = 1
disabled = 0
index=sqlserver

[perfmon://sqlserver:transactions]
object = (SQLServer|MSSQL[^:]*):Transactions
counters = Transactions; Longest Transaction Running Time
interval = 60
showZeroValue = 1
disabled = 0
index=sqlserver

This is my inputs.conf for the MSSQL add-on. I am not getting performance events such as locks, latches, or transactions. I am using a universal forwarder. Any help regarding this?
Which events need to be indexed by the Microsoft SQL add-on to monitor deadlocks in Splunk, and how?
How do I pass arguments to a script from inputs.conf? For example: shell_script.sh server1 server2
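For scripted inputs, arguments can be written directly into the stanza path after the script name; a minimal sketch, assuming the script lives in a (placeholder) app called my_app:

```conf
# $SPLUNK_HOME/etc/apps/my_app/local/inputs.conf
# The tokens after the script name are passed to the script as
# positional arguments ($1 and $2 in a shell script).
[script://./bin/shell_script.sh server1 server2]
interval = 300
sourcetype = my_script_output
disabled = 0
```

The interval and sourcetype values here are illustrative; adjust them to your environment.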
Hello Team, I am getting an error while running the command below, although I have checked that I have the 4.2 MSSQL driver. Please help.

| dbxquery connection="CloudAssessment" query="select DISTINCT Hostname, cpucount, Operating_System, ram, Landing_Zone, R_Lane, App_Group from [CloudStudio].[dbo].Assessment_MGL where Hostname IS NOT NULL"
| eval cpucount = if(like(cpucount,"\"\""),'',cpucount)
| eval Operating_System = if(like(Operating_System,"\"\""),'',Operating_System)
| eval ram = if(like(ram,"\"\""),'',ram)
| table Hostname, cpucount, Operating_System, ram, Landing_Zone, R_Lane, App_Group
| dbxoutput output="Velostrata_Input"

The error is: "uniqueKey not found in mappings". We have checked that all of the fields exist in the database.
Hi, I am trying to make a timechart visualisation, but I want it in IST (Indian Standard Time).

| eval received = strptime('timestamp', "%Y-%m-%dT%H:%M:%S.%3N")
| eval received_IST = received + 19800
| eval IST = strftime(received_IST, "%Y-%m-%d %H:%M")
| timechart count by Status

Status is a column consisting of the strings "true" and "false". The IST field holds the timestamp in IST, but how do I make the timechart use it? It automatically picks up _time by default.
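timechart always bins on _time, so one approach is to overwrite _time with the shifted epoch before charting; a sketch reusing the received field from the question:

```spl
... | eval received = strptime('timestamp', "%Y-%m-%dT%H:%M:%S.%3N")
    | eval _time = received + 19800
    | timechart count by Status
```

Note that shifting _time by a fixed offset is a workaround; the usual fix is to set the user's timezone preference in Splunk so _time renders in IST without any arithmetic.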
Hi All, please help me extract the response values from the XML snapshot below. Finally, I want to display a table like the one below. After mapping all of these values, I would like to display a line graph based on measObjLdn. For example, my problem statement is to display the cpu_avg values of 4 different components (management 1, management 2, management 3, management 4) separately, where measInfoId = statistics. I used the query below, but it is hard-coded. How can I map all the values correctly without hard-coding them?

index=AAA sourcetype=AAAB host=xxx
| spath
| rename measInfo.measType as Request_type, measInfo.measValue.r as P_value, measInfo.measValue.r{@p} as P_type, measInfo.measValue{@measObjLdn} as MeasobjLdn, measInfo{@measInfoId} as Measinfo_id
| table time, host, Measinfo_id, P_type, P_value
| eval temp2=mvzip(P_type,P_value,"=")
| mvexpand temp2
| rex field=temp2 "(?<P_type>.+)=(?<P_value>.+)"
| table _time host Measinfo_id P_type P_value
| search Measinfo_id=statistics AND P_type=2
| streamstats count as sno by _time
| eval ObjLdn=case(sno==1,"management 1",sno==2,"management 1",sno==3,"management 1",sno==4,"management 1")
| table _time host InfoId P_type P_value sno Measobjldn
| stats values(P_value) as P_Value by time, host, Measobjldn
| xyseries _time Measobjldn P_value

Please correct this query and help me.
I am trying to trigger an alert when Splunk is not receiving metrics. For now, I am checking whether the value is 0 and triggering an alert on that, but I am not sure I am doing it correctly. Can someone help me with this? Thanks in advance.
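A common pattern is to alert on the absence of data points rather than on a metric value of 0; a sketch, assuming a metrics index named my_metrics and a metric called cpu.usage (both placeholders for your own names):

```spl
| mstats count(cpu.usage) as datapoints WHERE index=my_metrics earliest=-15m
| where datapoints=0
```

Saved as an alert that triggers when the number of results is greater than zero, this fires whenever no cpu.usage data points arrived in the last 15 minutes. The mstats syntax varies slightly by Splunk version, so check the Search Reference for yours.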
I am searching the Windows event log. After the search completes, Account_Domain contains the value "- ABC". How can I keep only the "ABC" part? This is not working:

| rex field="Account_Domain" mode=sed "s/([0-9]{4}) /\1,,/"
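If the goal is to strip a leading dash and whitespace from the field, a sed-mode rex along these lines may work (assuming the unwanted prefix is literally a hyphen followed by spaces):

```spl
... | rex field=Account_Domain mode=sed "s/^-\s+//"
```

The `^` anchor keeps the substitution from touching hyphens elsewhere in the value.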
I have URLs that include numeric IDs in the path:

/api/clients/11111/interactions
/api/clients/22222/interactions
/api/clients/33333/profiles

I need to extract a service_name field, skipping the IDs. Ideally:

| service_name | count |
|---------------------------+-------|
| /api/clients/interactions | 2 |
| /api/clients/profiles | 1 |

Please help.
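One approach is to copy the URL into a new field and delete any purely numeric path segment with a sed-mode rex; this sketch assumes the URL is already extracted into a field named url (a placeholder for your actual field):

```spl
... | eval service_name=url
    | rex field=service_name mode=sed "s/\/\d+\//\//g"
    | stats count by service_name
```

The substitution collapses "/11111/" to "/", so /api/clients/11111/interactions becomes /api/clients/interactions; a trailing numeric segment (no closing slash) would need an extra pattern.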
Hi, I have a dashboard where the user enters certain parameters; using those tokens, an outputlookup file is created which is the base for the next panel's queries. Now my doubt is: if multiple users log in, how will it behave? How can I specify the path of the lookup file to be created as .../etc/users/app/lookup rather than ../etc/app/lookup? That way each user would get their own lookup without conflicting with the others. Please help.
Hi @Damien Dallimore, my question is similar to this one: https://answers.splunk.com/answers/186128 but I need a bit more guidance please (I am on Splunk 7.3.0). I have a REST endpoint that returns JSON, but I need the HTTP status codes to compare the JSON response against. I know that is achieved with a custom response handler, and I know how to select the custom handler in the UI, but I don't know how to write the Python. Please help me.
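In the REST API Modular Input, custom response handlers are Python classes added to the app's responsehandlers.py. The skeleton below is a sketch of the commonly used shape; the class name is made up, and the real app provides its own helper for emitting events (plain print is used here only to keep the sketch self-contained), so compare against the handlers bundled in responsehandlers.py before relying on the exact signature:

```python
import json

class StatusAwareResponseHandler:
    """Hypothetical custom handler: only index the body when the
    HTTP status code is 200, and tag the event with that code."""

    def __init__(self, **args):
        # Values typed into the "response handler arguments" UI
        # field arrive here as keyword arguments.
        self.args = args

    def __call__(self, response_object, raw_response_output,
                 response_type, req_args, endpoint):
        # response_object behaves like a requests.Response, so the
        # HTTP status code is available as .status_code.
        if response_object.status_code == 200 and response_type == "json":
            payload = json.loads(raw_response_output)
            payload["http_status"] = response_object.status_code
            # The real app emits events via its own stream helper;
            # print stands in for that here.
            print(json.dumps(payload))
```

Selecting the class name in the input's "response handler" field wires it in; anything other than a 200 JSON response is silently dropped in this sketch, which you may want to log instead.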
I want to calculate the number of disabled accounts per region from the following data:

Region  Disable  Acc
HK      Yes      AA
HK      No       BA
US      No       AA
UK      No       DA

Something like eval(if region="*") with stats count?
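A conditional count per region can be written with an eval inside the stats aggregation; this sketch assumes the fields are named Region and Disable as in the table:

```spl
... | stats count(eval(Disable="Yes")) as disabled_accounts, count as total_accounts by Region
```

The eval-inside-count form counts only the rows where the condition holds, so no separate filtering pass is needed.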
I have a dashboard that takes 3 inputs (TimePicker, Associate, and Activity). All items (inputs and dashboard panels) update based on the TimePicker, no problem. The Activity is only ever a single option (dropdown). However, the Associate is a series of checkboxes. For the dashboard panel, it uses a simple delimiter of " OR Associate=". The problem is that the Activity dropdown also has to update based on the Associates picked, and the delimiter for the dropdown's query would be different from the delimiter for the dashboard panel (much more complicated, with a nested eval). I do not see a way to have a different delimiter per consumer; is there one? If not, is there a way to use the selections from one input to populate a second input with a different delimiter?
I am experiencing an issue after making updates to the apps on the deployer for a search head cluster and trying to push them out to the nodes. The issue seems to hit on DB Connect, but that might just be the first app it hits. The error I am getting is about a file being used by another process:

Error while deploying apps to first member: Error while updating app=splunk_app_db_connect on target=https://dmmsplunksh03:8089: Non-200/201 status_code=500; {"messages":[{"type":"ERROR","text":"\n In handler 'localapps': Error installing application: Failed to copy: C:\Splunk\var\run\splunk\bundle_tmp\a1f2000df5b3093c\splunk_app_db_connect to C:\Splunk\etc\apps\splunk_app_db_connect. 5 errors occurred. Description for first 5: [{operation:\"renaming .tmp file to destination file\", error:\"The process cannot access the file because it is being used by another process.\", src:\"C:\Splunk\var\run\splunk\bundle_tmp\a1f2000df5b3093c\splunk_app_db_connect\bin\lib\jtds-1.3.1.jar\", dest:\"C:\Splunk\etc\apps\splunk_app_db_connect\bin\lib\jtds-1.3.1.jar\"}, {operation:\"renaming .tmp file to destination file\", error:\"The process cannot access the file because it is being used by another process.\", src:\"C:\Splunk\var\run\splunk\bundle_tmp\a1f2000df5b3093c\splunk_app_db_connect\bin\lib\rpcserver-all.jar\", dest:\"C:\Splunk\etc\apps\splunk_app_db_connect\bin\lib\rpcserver-all.jar\"}, {operation:\"copying contents from source to destination\", error:\"There are no more files.\", src:\"C:\Splunk\var\run\splunk\bundle_tmp\a1f2000df5b3093c\splunk_app_db_connect\bin\lib\", dest:\"C:\Splunk\etc\apps\splunk_app_db_connect\bin\lib\"}, {operation:\"copying contents from source to destination\", error:\"There are no more files.\", src:\"C:\Splunk\var\run\splunk\bundle_tmp\a1f2000df5b3093c\splunk_app_db_connect\bin\", dest:\"C:\Splunk\etc\apps\splunk_app_db_connect\bin\"}, {operation:\"copying contents from source to destination\", error:\"There are no more files.\", src:\"C:\Splunk\var\run\splunk\bundle_tmp\a1f2000df5b3093c\splunk_app_db_connect\", dest:\"C:\Splunk\etc\apps\splunk_app_db_connect\"}]"}]}

Has anyone seen this before, or know how to get around it so I can push out my updates?
Hello, I got a warning: "UTF8Processor - Using charset UTF-16LE, as the monitor is believed over the raw text which may be UTF-8". I added "CHARSET=UTF-8" to my props and pushed it to the HF, but I'm still seeing the message.
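CHARSET has to be applied on the Splunk instance whose splunkd.log actually emits the UTF8Processor warning, since that is where the character-set decoding happens; if a forwarder other than the HF is reading the file, pushing props to the HF alone may not help. A minimal props.conf sketch, with my_sourcetype as a placeholder for the affected sourcetype:

```conf
# props.conf on the instance that reports the UTF8Processor warning
[my_sourcetype]
CHARSET = UTF-8
```

A restart (or config reload) of that instance is needed for the change to take effect.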
Evaluating Splunk Enterprise. I added the "Splunk Add-on for Microsoft Windows." When I do a search, I get errors:

• Could not load lookup=LOOKUP-action_for_WinRegistry
• Could not load lookup=LOOKUP-action_for_fs_notification

In the InfoSec app, under Security Posture, I get errors like:

Error in 'DataModelEvaluator': Data model 'Intrusion_Detection' was not found.
Error in 'DataModelEvaluator': Data model 'Malware' was not found.

Not sure if these are related. Thanks for your advice.
So I have the following _json event that I need to wrangle into a more useful format. As you can see, there are pairs of related keys, e.g. name = and value =. I would like to combine each pair into one field, like a re-key, but do it globally for the entire source. For example:

name: target_user
value: rey.skywalker@jedi.com

would become

target_user=rey.skywalker@jedi.com

Any suggestions appreciated. I tried props and transforms on the search head with no luck. Thanks in advance.

{
  "actor": {
    "email": "kilo.ren@sith.com",
    "profileId": 100
  },
  "etag": "abcd1234",
  "events": [
    {
      "name": "edit",
      "parameters": [
        { "boolValue": false, "name": "primary_event" },
        { "boolValue": true, "name": "billable" },
        { "name": "doc_id", "value": "jakjd446532" },
        { "name": "doc_type", "value": "pdf" },
        { "name": "doc_title", "value": "Overview.pdf" },
        { "name": "visibility", "value": "shared_externally" },
        { "name": "owner", "value": "kilo.ren@sith.com" },
        { "boolValue": false, "name": "owner_is_shared_drive" },
        { "boolValue": false, "name": "owner_is_team_drive" }
      ],
      "type": "access"
    },
    {
      "name": "change_user_access",
      "parameters": [
        { "boolValue": true, "name": "primary_event" },
        { "boolValue": true, "name": "billable" },
        { "name": "visibility_change", "value": "external" },
        { "name": "target_user", "value": "rey.skywalker@jedi.com" },
        { "multiValue": [ ... ], "name": "old_value" },
        { "multiValue": [ ... ], "name": "new_value" },
        { "name": "old_visibility", "value": "private" },
        { "name": "doc_id", "value": "1d8546542318" },
        { "name": "doc_type", "value": "pdf" },
        { "name": "doc_title", "value": "Overview.pdf" },
        { "name": "visibility", "value": "shared_externally" },
        { "name": "owner", "value": "kilo.ren@sith.com" },
        { "boolValue": false, "name": "owner_is_shared_drive" },
        { "boolValue": false, "name": "owner_is_team_drive" }
      ],
      "type": "acl_change"
    }
  ],
  "id": {
    "applicationName": "drive",
    "customerId": "abcd1234",
    "time": "2020-01-12T18:42:34.543Z",
    "uniqueQualifier": 123456
  },
  "kind": "admin#reports#activity"
}
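A common search-time way to flatten these name/value parameter arrays is spath plus mvzip; a sketch, assuming events arrive with the structure above (the spath paths may need adjusting to your actual sourcetype):

```spl
... | spath path=events{}.parameters{}.name output=p_name
    | spath path=events{}.parameters{}.value output=p_value
    | eval pairs=mvzip(p_name, p_value, "=")
    | mvexpand pairs
    | rex field=pairs "(?<key>[^=]+)=(?<val>.+)"
```

One caveat: mvzip pairs values positionally, and parameters that carry boolValue instead of value will make the two multivalue fields different lengths and misalign the pairs, so those entries may need filtering or separate handling (possibly at ingest time via props/transforms).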
I log events from 30 devices every minute, and I'd like to return a simple table of the count of events by deviceID in a given period. However, if I use something like:

index="myApplication" "myAppMessage"
| chart count(deviceEvent) by deviceID

I lose any deviceID rows that have 0 events. That is, my statistics table might show only 28 rows if two devices were offline for the entire period. I tried adding a lookup .csv with the 30 deviceIDs listed, but if I join the search to the .csv:

| inputlookup append=true myLookup.csv
| join type=left deviceID [search index="myApplication" "myAppMessage" | lookup myLookup.csv deviceID OUTPUT sortNumber]
| chart count(deviceEvent) by deviceID

I get all 30 rows, but the count of deviceEvents is lost. Actually, it returns 1 for devices that have events (instead of 30 or 60, depending on the time picker) and 0 for devices that have none. I first tried joining the .csv to the main search:

index="myApplication" "myAppMessage"
| lookup myLookup.csv deviceID OUTPUT sortNumber
| chart count(deviceEvent) by deviceID

But this doesn't retrieve the missing devices, despite them being present in the .csv. Is there a way to force all rows of the lookup table to appear in the statistics table?
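A join-free pattern is to count first, append the full device list from the lookup, and then roll the two result sets together; a sketch assuming myLookup.csv has a deviceID column:

```spl
index="myApplication" "myAppMessage"
| stats count by deviceID
| inputlookup append=true myLookup.csv
| stats sum(count) as count by deviceID
| fillnull value=0 count
```

Devices present only in the lookup contribute a row with no count, which the second stats preserves and fillnull turns into 0, so all 30 rows survive with their true event counts.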
Hello. I've set up a few Palo Alto to Splunk instances in the past and have never had a problem getting a syslog feed from the Palo to Splunk on port 514. Everything is set up on the FW correctly. When I tcpdump port 514, I see the traffic trying to come in, but when I search for source="udp:514", nothing shows up.

inputs.conf:

connection_host = ip
sourcetype = pan:log
no_appending_timestamp = true
index = paloalto
disabled = 0

UFW status:

To            Action  From
22/tcp        ALLOW   Anywhere
514           ALLOW   Anywhere
80/tcp        ALLOW   Anywhere
443/tcp       ALLOW   Anywhere
514/udp       ALLOW   Anywhere
8000          ALLOW   Anywhere
21/tcp        ALLOW   Anywhere
5514/udp      ALLOW   Anywhere
22/tcp (v6)   ALLOW   Anywhere (v6)
514 (v6)      ALLOW   Anywhere (v6)
80/tcp (v6)   ALLOW   Anywhere (v6)
443/tcp (v6)  ALLOW   Anywhere (v6)
514/udp (v6)  ALLOW   Anywhere (v6)
8000 (v6)     ALLOW   Anywhere (v6)
21/tcp (v6)   ALLOW   Anywhere (v6)

I'm sure I'm missing something obvious here. Any thoughts? I went back to the install guide and checked off every box. Confused.
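For comparison, a complete UDP syslog input normally carries those settings under a [udp://514] stanza header; settings that sit outside any stanza are ignored, so it is worth checking whether the header was only omitted from the post or is actually missing from the file. A minimal sketch:

```conf
# inputs.conf - a minimal UDP syslog input for the Palo Alto feed
[udp://514]
connection_host = ip
sourcetype = pan:log
no_appending_timestamp = true
index = paloalto
disabled = 0
```

Also note that since the events go to index=paloalto, a search for source="udp:514" returns nothing unless that index is included (explicitly, or in the role's default searched indexes).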