reinit notebook depot

This commit is contained in:
vincent 2020-11-26 18:38:25 +01:00
commit ac403f2028
109 changed files with 5118 additions and 0 deletions

7
.gitignore vendored Normal file
View File

@ -0,0 +1,7 @@
todoist
Quicknote.md
_docs
site
_build
note
logbook.md

56
Cuisine/idee cuisine.md Normal file
View File

@ -0,0 +1,56 @@
# Cooking Ideas
## Everyday
- Spaghetti bolognese
- Pasta carbonara
- Pasta with hot salmon
- Cold salmon and surimi pasta
- Risotto-style pasta with coppa and mushrooms
- Risotto-style pasta with shrimp
- Ravioli
- Risotto
- Escalope in cream sauce
- Gratin dauphinois
- Salad
- Pizza
- Salmon or beef lasagna
- Ground beef steak / cordon bleu
- Steak and potatoes
- Soft-boiled eggs
- Tuna rice salad
- Mixed salad
- Omelette
- Wraps
- Eggs with tuna
- Hard-boiled eggs with smoked salmon
- Quiche
- Croque-monsieur
- French toast
- Mont d'Or
- Tartine gourmande
- Enchiladas
- Salmon enchiladas
- Surimi terrine
- Vol-au-vent
## Sauces
- Béchamel
- Vinaigrette
- Pepper sauce
- Maroilles sauce
- Sorrel sauce
# To prepare
- Cassoulet
- Chili con carne
- Filet mignon
- Roast chicken
- Carbonnade
- Pork cheeks
- Orloff roast
# Desserts
- Apple clafoutis
- Strawberry mousse

3
Divers/index.md Normal file
View File

@ -0,0 +1,3 @@
[site showing the purchase prices of houses and land](https://app.dvf.etalab.gouv.fr)
[list of public APIs](https://github.com/public-apis/public-apis)

0
Divers/joke.md Normal file
View File

7
Divers/lien/OpenData.md Normal file
View File

@ -0,0 +1,7 @@
# OpenData Links
[Visualization of the French national budget](https://budget.parlement-ouvert.fr/visualisation)
[Git-style versioning of the French legal codes](https://archeo-lex.fr/codes)
[La Fabrique de la Loi (tracking how laws are made)](https://www.lafabriquedelaloi.fr/)

3
Divers/lien/index.md Normal file
View File

@ -0,0 +1,3 @@
[site showing the purchase prices of houses and land](https://app.dvf.etalab.gouv.fr)
[list of public APIs](https://github.com/public-apis/public-apis)

View File

@ -0,0 +1,9 @@
# Mental Model
## Links
- [Ask HN: Which books teach mental models?](https://news.ycombinator.com/item?id=19895407)
- [Mental Models: The Best Way to Make Intelligent Decisions (109 Models Explained)](https://fs.blog/mental-models/)
- [The Great Mental Models Project](https://fs.blog/tgmm/) - Equalize opportunity in the world by making a high-quality, multidisciplinary, interconnected education free and available to everyone.
- [Model Thinking Course](https://www.coursera.org/learn/model-thinking)
- [The Power of Incentives: Inside the Hidden Forces That Shape Behavior](https://fs.blog/2017/10/bias-incentives-reinforcement/)
- [Mental Models I Find Repeatedly Useful (2016)](https://medium.com/@yegg/mental-models-i-find-repeatedly-useful-936f1cc405d)

View File

@ -0,0 +1,3 @@
>To want oneself free is also to want others free
Simone de Beauvoir

View File

@ -0,0 +1,26 @@
A knight
on his charger,
day and night at full gallop,
rode on and on,
singing all the while.
He sought El Dorado,
beyond the highest
mountains,
in the depths of the earth,
chasing his dream!
Half dead,
still walking on,
he saw a shadow at full gallop.
He cried out to it:
"Shadow, tell me, am I still far from El Dorado?"
"Beyond the highest
mountains,
in the depths of the earth,
ride without rest,"
said the lonely shadow,
"if you seek El Dorado!"

View File

@ -0,0 +1,4 @@
Je ne connaîtrai pas la peur, car la peur tue l'esprit. La peur est la petite mort qui conduit à l'oblitération totale. J'affronterai ma peur. Je lui permettrai de passer sur moi, au travers de moi. Et lorsqu'elle sera passée, je tournerai mon œil intérieur sur son chemin. Et là où elle sera passée, il n'y aura plus rien. Rien que moi.
I must not fear. Fear is the mind-killer. Fear is the little-death that brings total obliteration. I will face my fear. I will permit it to pass over me and through me. And when it has gone past I will turn the inner eye to see its path. Where the fear has gone there will be nothing. Only I will remain.

View File

@ -0,0 +1,18 @@
# Quips
- That's nothing to sneeze at (c'est pas piqué des hannetons)
- We're not here to kiss the princess
- We're not here to string pearls
- We're not here to suck on ice cubes
## Audiard
- The day they put idiots into orbit, you'll never stop spinning
- Idiots dare anything; that's even how you can tell them apart
- In life you always share the troubles, never the money.
## Dirty Harry
> - When I see a grown male running after a female with the obvious intent of raping her, I shoot first; that's my policy.
> - Intent? That remained to be established.
> - When a naked guy is chasing a girl with a butcher's knife in hand, I find it hard to believe he's out collecting for the Red Cross.
- Opinions are like assholes: everybody's got one

View File

@ -0,0 +1 @@
Republic. I love the sound of that word. It means that people can live free, speak as they please, come and go, buy or sell, get drunk or stay sober, as they wish. Some words make you vibrate. Republic is one of those words that tighten my throat; the same feeling that grips a father seeing his baby take its first steps, or his first child start shaving and speaking like a man. Some words warm your heart. Republic is one of those words.

View File

@ -0,0 +1,4 @@
Lo, there do I see my father.
Lo, there do I see my mother and my sisters and my brothers.
Lo, there do I see all my ancestors, seated, watching me.
And lo, they call to me and bid me take my place among them in the halls of Valhalla, where the brave live forever.

1
IT/JS/index.md Normal file
View File

@ -0,0 +1 @@
[JavaScript without jQuery](https://github.com/nefe/You-Dont-Need-jQuery)

0
IT/JS/vuejs.md Normal file
View File

Binary file not shown.

BIN
IT/SQL/66Classeur1.pdf Normal file

Binary file not shown.

784
IT/SQL/FormationSQL.md Normal file
View File

@ -0,0 +1,784 @@
# SQL Server Training
<!-- @import "[TOC]" {cmd="toc" depthFrom=1 depthTo=6 orderedList=false} -->
<!-- code_chunk_output -->
- [SQL Server Training](#sql-server-training)
  - [Vocabulary](#vocabulary)
  - [History of SQL Server](#history-of-sql-server)
  - [SQL Server Specifics](#sql-server-specifics)
  - [Nature of RDBMSs](#nature-of-rdbmss)
  - [Codd's Design](#codds-design)
    - [Relations](#relations)
    - [Attributes](#attributes)
    - [Atomicity](#atomicity)
  - [Relational Algebra](#relational-algebra)
  - [Database Migration](#database-migration)
  - [Installation/Configuration](#installationconfiguration)
    - [Patching](#patching)
    - [Immutable installation choices](#immutable-installation-choices)
    - [Services](#services)
    - [Instances](#instances)
      - [Minimal configuration](#minimal-configuration)
    - [Database configuration](#database-configuration)
    - [System databases](#system-databases)
    - [Renaming a SQL Server](#renaming-a-sql-server)
  - [Tools](#tools)
    - [SQLcmd.exe](#sqlcmdexe)
  - [Administration tables](#administration-tables)
    - [sys.objects](#sysobjects)
    - [Instance tables](#instance-tables)
    - [Database tables](#database-tables)
  - [Storage](#storage)
    - [Filegroups](#filegroups)
    - [File definition](#file-definition)
    - [Capacity planning](#capacity-planning)
    - [Disks](#disks)
      - [SSD](#ssd)
      - [Virtual machines](#virtual-machines)
      - [Shrinking files](#shrinking-files)
    - [Transactions](#transactions)
    - [Partitioning](#partitioning)
  - [Types](#types)
  - [Computed columns](#computed-columns)
  - [Snapshots](#snapshots)
  - [Import/export](#importexport)
    - [BULK INSERT](#bulk-insert)
    - [SSIS](#ssis)
  - [SQL Schemas](#sql-schemas)
  - [Security](#security)
    - [Logins](#logins)
    - [SQL Users](#sql-users)
    - [Privileges](#privileges)
    - [Roles](#roles)
  - [Encryption](#encryption)
  - [Administration tasks](#administration-tasks)
    - [SQL Agent](#sql-agent)
      - [Database Mail](#database-mail)
    - [Jobs](#jobs)
    - [Monitoring](#monitoring)
      - [Disks](#disks-1)
      - [Transactions](#transactions-1)
      - [Cache](#cache)
    - [DBCC](#dbcc)
      - [Database integrity](#database-integrity)
      - [Repairing a damaged database](#repairing-a-damaged-database)
  - [Indexes](#indexes)
  - [Backups](#backups)
    - [Recovery models](#recovery-models)
    - [Command syntax](#command-syntax)
    - [Emergency](#emergency)
    - [Multi-file backups](#multi-file-backups)
    - [Compression](#compression)
    - [Reading a backup](#reading-a-backup)
  - [Monitoring and performance](#monitoring-and-performance)
    - [Data historization](#data-historization)
    - [Locking and blocking](#locking-and-blocking)
<!-- /code_chunk_output -->
[professional benchmarks](http://www.tpc.org/information/benchmarks.asp)
## Vocabulary
- SGBD: database management system (DBMS)
- page: minimal storage unit, 8 KB
- SQL schema: logical storage unit, a container for relational objects (dbo is a schema) = namespace
- extent: block of 8 pages = 64 KB
- LOBs: large objects
- GO: batch separator interpreted by the client; it runs the preceding statements and waits for them to complete before continuing (example below)
- OVER(): defines the window (partitioning, ordering) used by window functions
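As a minimal sketch of the GO separator (the table name is hypothetical): the client sends each batch separately, so the second batch is only sent once the first one has completed.
```SQL
-- batch 1: sent and executed first
CREATE TABLE dbo.T_DEMO (ID INT);
GO
-- batch 2: only sent once the previous batch has completed
INSERT INTO dbo.T_DEMO (ID) VALUES (1);
GO
```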
## History of SQL Server
1974: Ingres
1984: Sybase (first client/server DBMS)
1988: Sybase/Microsoft agreement
1989: release of SQL Server for Unix
1993: break with Sybase; release of the first all-Windows version of SQL Server
1999: version 7, the first professional-grade version
2005: redesign and move upmarket
2012: SQL Server becomes more performant than Oracle
## SQL Server Specifics
- INFORMATION_SCHEMA: `select * from information_Schema.Tables` -> lists the tables
- multi-schema: each database can contain any number of schemas
- databases on the same server can query one another (Oracle and PostgreSQL must use a dblink)
- each database has its own transaction log (Oracle and PostgreSQL have one log for everything)
- transaction: a batch of commands
- In-Memory: no more writes to the transaction log (to be used in high-availability mode)
- stretch tables: table storage in the cloud
## Nature of RDBMSs
## Codd's Design
- separation of hardware and logic
- relational algebra
- set-based insert, delete, and update (a single statement can modify several rows)
### Relations
- has a name
- has a collection of attributes
- has a primary key
- may have other keys
- atomic values
- **no relation without a primary key**
### Attributes
- name unique within the relation
- **no NULL**
- belongs to a domain (the possible values of the attribute)
### Atomicity
atomic data: cannot be subdivided without losing meaning
in a database, data should be stored as atomically as possible (for example, a social security number can be split into several pieces of information)
## Relational Algebra
**no joins in the WHERE clause**
logical execution order of a query: *FROM, WHERE, GROUP BY, HAVING, SELECT, ORDER BY*
## Database Migration
- from 7 to 2000
- from 2000 to 2005 or 2008
- from 2005 to now
## Installation/Configuration
- collation: French, binary code point
- authentication: mixed mode
- **format the disk with 64 KB clusters to align with the extent size**
- no RAID 5 or 6; better to stick to RAID 0, 1, 10
- **no tools installed on the server**
### Patching
- before 2016: wait for Microsoft's prompting
- from 2016 on: patch as soon as possible
### Immutable installation choices
- directory of the executables
- directory of the system databases
- server collation
### Services
- SQL Server Browser: lets clients reach instances by name instead of by port (recommended to disable in production)
- manage the services through SQL Server Configuration Manager
### Instances
- port 1433 if there is a single instance
- **never restart an instance except in case of absolute necessity**
#### Minimal configuration
- max server memory
- cost threshold for parallelism -> cost above which the engine considers parallelism -> at least 12 on a personal machine, between 25 and 100 in production
- max degree of parallelism -> maximum number of parallel threads; recommendation on a large machine: 1/4 of the cores minus 1 for transactional workloads, more for decision-support workloads
- optimize for ad hoc workloads: 1
- backup compression default: 1; trades CPU compression time against disk write speed, except for databases set up with Transparent Data Encryption (encryption of the database files)
- backup checksum default: 1
(a settings sketch follows the list)
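A sketch of how these options are set with sp_configure; the values shown are illustrative, not recommendations from the course:
```SQL
EXEC sp_configure 'show advanced options', 1;
RECONFIGURE;
EXEC sp_configure 'max server memory (MB)', 8192;       -- adjust to the machine
EXEC sp_configure 'cost threshold for parallelism', 25;
EXEC sp_configure 'max degree of parallelism', 4;
EXEC sp_configure 'optimize for ad hoc workloads', 1;
EXEC sp_configure 'backup compression default', 1;
EXEC sp_configure 'backup checksum default', 1;
RECONFIGURE;
```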
### Database configuration
**never set to ON:**
- AUTO_CLOSE
- AUTO_SHRINK
**set to ON:**
- AUTO_CREATE_STATISTICS
- AUTO_UPDATE_STATISTICS
- for the Enterprise edition: AUTO_UPDATE_STATISTICS_ASYNC
enabling partial containment makes it possible to create database-scoped user accounts and to fix collation problems
```SQL
ALTER DATABASE [DB_GRAND_HOTEL] SET CONTAINMENT = PARTIAL WITH NO_WAIT
```
### System databases
**visible:**
- master: list of databases, logins, error messages (must be backed up every day)
- model: template for creating new databases
- msdb: SQL Server job management (must be backed up every day)
- tempdb: temporary objects
**invisible:**
- mssqlsystemresource: system routines
**optional:**
- distribution (replication)
- semanticdb (semantic search)
### Renaming a SQL Server
- rename the server at the Windows level
- change the name in master
```SQL
select * from sys.servers
exec sp_dropserver ''
exec sp_addserver '', local
```
- restart the service
## Tools
- SQL Server Management Studio (do not use it on a defective instance)
- SentryOne Plan Explorer (add-on that improves the query execution plan display)
- SQL Server Configuration Manager
- ApexSQL Log: log visualization
### SQLcmd.exe
command-line utility
options (sample invocations below):
- *-S* server name
- *-U* login
- *-P* password
- *-E* Windows integrated security
- *-q* buffers a query (GO to run it)
- *-Q* runs a query
- *-A* DAC mode (Dedicated Administrator Connection): emergency connection that reserves resources for the session; use it locally
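Two sample invocations (server and instance names are hypothetical): a one-shot query with Windows authentication, then an emergency DAC session.
```
sqlcmd -S MYSERVER\INST01 -E -Q "SELECT name FROM sys.databases"
sqlcmd -S MYSERVER\INST01 -E -A
```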
## Administration tables
### sys.objects
- contains all the objects of the database
- *is_ms_shipped* identifies the system objects
### Instance tables
- sys.servers
- sys.databases
- sys.master_files
- sys.configurations
### Database tables
- sys.tables
- sys.schemas
- sys.database_files
- sys.indexes
- sys.index_columns
- sys.stats
a sample query combining these views follows.
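As a sketch, joining two of these views to list the user tables with their schemas:
```SQL
SELECT s.name AS schema_name, t.name AS table_name
FROM sys.tables AS t
INNER JOIN sys.schemas AS s
      ON t.schema_id = s.schema_id
WHERE t.is_ms_shipped = 0;   -- exclude system objects
```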
## Storage
treat the files as if they were OS disks
two files:
- mdf: data file (Master Data File)
- ldf: transaction log (Log Data File)
### Filegroups
several filegroups can be created for the same database, for example:
put the purely relational columns in filegroup X
put BLOB-type data in a dedicated filegroup
### File definition
NAME
FILENAME
SIZE
MAXSIZE
FILEGROWTH
- sizes are expressed in KB, MB, GB
- no growth as a percentage (a CREATE DATABASE sketch follows).
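A minimal CREATE DATABASE sketch using these attributes (database name and file paths are hypothetical):
```SQL
CREATE DATABASE DB_DEMO
ON PRIMARY
   (NAME = 'DB_DEMO_data',
    FILENAME = 'D:\SQL\DATA\DB_DEMO.mdf',
    SIZE = 512MB,
    MAXSIZE = UNLIMITED,
    FILEGROWTH = 64MB)
LOG ON
   (NAME = 'DB_DEMO_log',
    FILENAME = 'E:\SQL\LOG\DB_DEMO.ldf',
    SIZE = 128MB,
    FILEGROWTH = 64MB);
```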
### Capacity planning
- transaction log: 20% of the data size for large databases, 40% for small ones
- do not place tables in the PRIMARY filegroup
- create a BLOB filegroup
- create several files per filegroup, according to the physical disks
### Disks
- RAID 0 or 1 only
#### SSD
write optimization required: **needs write-intensive disks**
#### Virtual machines
better to use non-virtualized storage
not recommended for this particular use
#### Shrinking files
DBCC SHRINKFILE
options:
- EMPTYFILE: empties the file (requires another file in the filegroup to receive the data)
### Transactions
display the transaction log:
`select * from sys.fn_dblog(null, null)`
the transaction log file is the most sensitive to write speed
3 logging modes:
- FULL: maximal logging, no purge
- BULK_LOGGED: minimal logging, no purge
- SIMPLE: minimal logging, automatic purge
the first two require transaction log backups to purge the log (example below)
for transactional databases, use FULL mode
for decision-support/BI databases, rather SIMPLE
transaction log backup frequency: every 20 to 30 minutes
if the log is full, switch to SIMPLE mode:
`alter database [toto] set recovery simple`
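In FULL or BULK_LOGGED mode the normal way to purge the log is a log backup; a minimal sketch (the backup path is hypothetical):
```SQL
BACKUP LOG [toto]
TO DISK = 'D:\SQL\BACKUP\toto.trn';
```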
### Partitioning
reduces or spreads the range of data scanned for:
- searches
- updates (parallelism)
1) create the storage (filegroups)
2) create the partition function
```SQL
CREATE PARTITION FUNCTION PF_DATE_FACTURE (DATETIME2(0))
AS RANGE RIGHT
FOR VALUES ('2018-01-01', '2019-01-01', '2020-01-01');
```
3) create the partition scheme
```SQL
CREATE PARTITION SCHEME PS_DATE_FACTURE
AS PARTITION PF_DATE_FACTURE
TO (FG_OLD, FG_2018, FG_2019, FG_2020);
```
**a table stored as a clustered index cannot be partitioned as-is: you must drop the clustered index and recreate it on the partition scheme**
4) create the object on the partition scheme
```SQL
CREATE TABLE (...)
ON PS_DATE_FACTURE (coll_critere)
CREATE INDEX (...)
ON PS_DATE_FACTURE (coll_critere)
```
## Types
character: *char, varchar, nchar, nvarchar*
numeric: *int, smallint, bigint, tinyint, decimal, numeric, float, real*
time: *datetime (obsolete), **datetime2**, time, datetimeoffset*
binary: *bit, binary, varbinary, hierarchyid, uniqueidentifier*
LOBs:
- CLOB => VARCHAR(max)
- NCLOB => NVARCHAR(max)
- BLOB => VARBINARY(max), XML, GEOGRAPHY, GEOMETRY
## Computed columns
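A computed column is declared with AS and can be PERSISTED so that it is stored and indexable; a minimal sketch (table and columns are hypothetical):
```SQL
CREATE TABLE dbo.T_FACTURE
(FCT_ID       INT IDENTITY PRIMARY KEY,
 FCT_PRIX_HT  DECIMAL(10,2),
 FCT_TVA      DECIMAL(4,3),
 -- computed from the two preceding columns
 FCT_PRIX_TTC AS (FCT_PRIX_HT * (1 + FCT_TVA)) PERSISTED);
```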
## Snapshots
limited to the Enterprise edition
a photograph of the database (comparable to an inverted transaction log).
to be used on large databases with a sturdy server, because the server must update the snapshot at every modification of the database (sketch below).
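A sketch of snapshot creation (the sp__DB_SNAP procedure further down automates this; the file path is hypothetical, and NAME must be the logical name of the source data file):
```SQL
CREATE DATABASE DB_GRAND_HOTEL_SNAP
ON (NAME = DB_GRAND_HOTEL,
    FILENAME = 'D:\SQL\SNAP\DB_GRAND_HOTEL.ss')
AS SNAPSHOT OF DB_GRAND_HOTEL;
```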
## Import/export
### BULK INSERT
options (sketch below):
- CHECK_CONSTRAINTS: checks the CHECK and FK constraints
- FIRE_TRIGGERS: fires the triggers
- KEEPIDENTITY: keeps the incoming values in the IDENTITY columns
- KEEPNULLS: empty columns are loaded as NULL
- BATCHSIZE: sets the size of a data batch
- KILOBYTES_PER_BATCH: splits the load by volume
- ROWS_PER_BATCH: splits the load by row count
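A sketch combining some of these options (table, file, and delimiters are hypothetical):
```SQL
BULK INSERT dbo.T_CLIENT
FROM 'D:\IMPORT\clients.csv'
WITH (FIELDTERMINATOR = ';',
      ROWTERMINATOR = '\n',
      FIRSTROW = 2,          -- skip the header row
      BATCHSIZE = 50000,
      CHECK_CONSTRAINTS,
      FIRE_TRIGGERS);
```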
### SSIS
## SQL Schemas
storage space for relational objects
- default schema of the database: dbo
two levels:
- a default schema is assigned to each user
- a default schema for the database
creation:
```SQL
create schema nom_schema
AUTHORIZATION nom_utilisateur
```
transfer:
```SQL
Alter SCHEMA nom_schema
TRANSFER nom_objet
```
delete
```SQL
DROP SCHEMA nom_schema
```
## Security
### Logins
create a SQL login:
```SQL
create LOGIN CNX_SQL_LAMBDA WITH PASSWORD = 'AZERT123'
```
create a Windows login:
```SQL
create LOGIN CNX_SQL_LAMBDA From windows
with DEFAULT_DATABASE=toto,DEFAULT_LANGUAGE= us_english
```
### SQL Users
contained SQL users
two cases:
- a user without a login
- a user connecting directly to the database
in the latter case the database must be configured as *contained*
```SQL
ALTER DATABASE [DB_GRAND_HOTEL] SET CONTAINMENT = PARTIAL WITH NO_WAIT
```
### Privileges
list the available privileges:
```SQL
select * FROM sys.fn_builtin_permissions(null)
ORDER BY parent_class_desc,covering_permission_name, class_desc
```
```SQL
GRANT <privilege list>
ON <object to grant on>
TO <list of grantee entities>
DENY EXECUTE
ON SCHEMA::[dbo]
TO [USR_ECRIVAIN]
```
### Roles
allow privileges to be preset (membership example below)
server roles: sysadmin, bulkadmin, dbcreator, setupadmin, securityadmin, serveradmin
metadata:
sys.server_principals
sys.server_permissions
predefined database roles: db_owner, db_datareader, db_datawriter, db_denydatareader, db_denydatawriter, db_securityadmin, db_accessadmin, db_ddladmin, db_backupoperator
public: the base role every user has
review the security functions in the slide deck
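Membership is granted with ALTER SERVER ROLE / ALTER ROLE (SQL Server 2012 and later); a sketch reusing logins and users created elsewhere in these notes:
```SQL
-- server-level role
ALTER SERVER ROLE dbcreator ADD MEMBER CNX_ADMIN;
-- database-level role
ALTER ROLE db_datareader ADD MEMBER USR_LECTEUR;
```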
## Encryption
review the security functions in the slide deck
## Administration tasks
### SQL Agent
task scheduler bundled with SQL Server.
the agent's data is stored in the msdb database.
it contains jobs and alerts
#### Database Mail
sends e-mail through the SQL engine (example below)
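Sending goes through msdb.dbo.sp_send_dbmail; the profile name and recipient below are hypothetical:
```SQL
EXEC msdb.dbo.sp_send_dbmail
     @profile_name = 'DBA_PROFILE',
     @recipients   = 'dba@example.com',
     @subject      = 'Database Mail test',
     @body         = 'Message sent through the SQL engine.';
```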
### Jobs
a set of steps to execute
runs by default under the agent's service account
Transact-SQL steps run under the SQL Server service account
to extend the permissions, a "proxy" must be used
### Monitoring
#### Disks
- disk saturation
- fill rate of the database files
- growth of the databases' SQL files
- disk IO activity
#### Transactions
- long transactions
- blocking
- deadlocks
#### Cache
- retention of the pages in cache
- cache-hit ratio
### DBCC
Database Console Commands
utility used to:
- obtain information
- modify the behavior of the engine
- perform low-level maintenance
#### Database integrity
- CHECKALLOC
- CHECKCATALOG
- CHECKTABLE
- CHECKFILEGROUP
- CHECKDB
- CHECKCONSTRAINTS
a typical call follows the list.
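A typical invocation (the script a few files below loops it over every database):
```SQL
DBCC CHECKDB ('DB_GRAND_HOTEL') WITH NO_INFOMSGS;
```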
#### Repairing a damaged database
read the diagnostics, but:
- do not rush to the REPAIR command
- start by checking the storage medium (disks)
- move the database if needed
- if the damaged object is a nonclustered index, drop it and recreate it
- if it is a table, you may be able to recover the damaged pages (RESTORE PAGE)
if REPAIR_ALLOW_DATA_LOSS is used:
- check the constraints (especially FKs) on the repaired tables (CHECKCONSTRAINTS)
- delete the orphan rows with queries (left anti semi join)
- disable the FK constraints beforehand
## Indexes
- a PRIMARY KEY creates a clustered index by default
- a FOREIGN KEY does not always need an index
a redundant data structure used to speed up certain searches
relational:
- B-tree+
- hash
- Bw-tree
analytic:
- columnstore
special:
- XML
- GEO
- full-text
B-tree indexes can be used for:
- =
- <, >, <=, >=
- BETWEEN
- LIKE 'blabla%'
- GROUP BY
- ORDER BY
2 kinds of index:
- CLUSTERED: the table sorted by the index key (one per table, since it is the table itself)
- NONCLUSTERED: an index holding a copy of the data
the latter needs a lookup to get back to the original row of the table:
- through a row ID (if there is no clustered index)
- through the value of the clustered key
the row ID holds 3 pieces of information:
- file number within the database (file_id)
- page number within the file
- row slot number within the page
a clustered index key should:
- be unique
- be NOT NULL
- be as small as possible
- never be modified
- every table should have a PRIMARY KEY
- the primary key should be an IDENTITY column
to see index fragmentation:
```SQL
select * from sys.dm_db_index_physical_stats(NULL,NULL,NULL,NULL)
where avg_fragmentation_in_percent > 10 and page_count > 64
```
to defragment:
```SQL
ALTER INDEX ... REORGANIZE -- not blocking, but not as thorough
ALTER INDEX ... REBUILD    -- blocking, but thorough (non-blocking with the Enterprise edition in ONLINE mode)
```
## Backups
backing up means duplicating the data of a computer system so that it can be restored
two purposes:
- backup to recover from human error
- backup for archiving
3 backup types:
- full:
  - records the entire database as of the end time of the full backup.
  - leaves the log as it is.
  - serves as the baseline for differential and transaction log backups.
- differential: records the pages modified since the last full backup, plus the transactions that occurred during the backup
- transaction log: records the transactions that occurred since the last backup
### Recovery models
- FULL (the default), mandatory for high availability
- BULK_LOGGED
- SIMPLE
### Command syntax
full:
BACKUP DATABASE
differential:
BACKUP DATABASE WITH DIFFERENTIAL
transaction log:
BACKUP LOG
(full sketches below)
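Complete sketches of the three commands (the backup paths are hypothetical):
```SQL
BACKUP DATABASE DB_GRAND_HOTEL
TO DISK = 'D:\SQL\BACKUP\DB_GRAND_HOTEL.full.bak';
BACKUP DATABASE DB_GRAND_HOTEL
TO DISK = 'D:\SQL\BACKUP\DB_GRAND_HOTEL.diff.bak'
WITH DIFFERENTIAL;
BACKUP LOG DB_GRAND_HOTEL
TO DISK = 'D:\SQL\BACKUP\DB_GRAND_HOTEL.trn';
```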
### Emergency
**COPY_ONLY**: performs a backup without recording it in the backup sequence
### Multi-file backups
a backup can be written to several files in parallel; each file then holds part of the backup
a backup can also be sent to several different devices
### Compression
WITH COMPRESSION
- gain in backup time
- large gain in restore time
- large gain in volume
### Reading a backup
- RESTORE LABELONLY: gives information about the backup media
- RESTORE HEADERONLY: lists the backup sets contained in the media
- RESTORE FILELISTONLY: lists the files restorable from one of the backup sets
- RESTORE VERIFYONLY: checks the backup (sketch below)
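All of these take the media as argument; a sketch (the path is hypothetical):
```SQL
RESTORE HEADERONLY   FROM DISK = 'D:\SQL\BACKUP\DB_GRAND_HOTEL.full.bak';
RESTORE FILELISTONLY FROM DISK = 'D:\SQL\BACKUP\DB_GRAND_HOTEL.full.bak';
RESTORE VERIFYONLY   FROM DISK = 'D:\SQL\BACKUP\DB_GRAND_HOTEL.full.bak';
```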
## Monitoring and performance
### Data historization
temporal tables:
- system-versioned table: a production table to which two technical columns are added; they can be "hidden" and are filled automatically on every INSERT/UPDATE
- history table: a technical table holding the history of the successive versions of each row of the table it is attached to.
the time interval is closed on the left and open on the right
temporal queries (sketch below):
- AS OF
- FROM
- BETWEEN
- CONTAINED IN
- ALL
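A sketch of a system-versioned table with HIDDEN period columns and an AS OF query (table and columns are hypothetical):
```SQL
CREATE TABLE dbo.T_TARIF
(TRF_ID     INT PRIMARY KEY,
 TRF_PRIX   DECIMAL(10,2),
 -- the two technical columns, filled automatically
 VALID_FROM DATETIME2(0) GENERATED ALWAYS AS ROW START HIDDEN NOT NULL,
 VALID_TO   DATETIME2(0) GENERATED ALWAYS AS ROW END   HIDDEN NOT NULL,
 PERIOD FOR SYSTEM_TIME (VALID_FROM, VALID_TO))
WITH (SYSTEM_VERSIONING = ON (HISTORY_TABLE = dbo.T_TARIF_HISTO));
-- temporal query: state of the data at a given instant
SELECT * FROM dbo.T_TARIF
FOR SYSTEM_TIME AS OF '2020-11-01T00:00:00';
```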
### Locking and blocking
pessimistic locking mode by default
- a write blocks reads and writes by other processes
- locking can lead to: waits, blocking, deadlocks

View File

@ -0,0 +1,25 @@
-- DISKS
-- 1 disk saturation
-- 2 fill rate of the database files
-- 3 growth of the databases' SQL files
-- 4 disk IO activity (latency)
-- TRANSACTIONS
-- 1 long transactions (yes, but how long??? it depends on the application...)
-- 2 blocking
-- 3 deadlocks
-- CACHE
-- retention of the pages in cache
-- cache-hit ratio

View File

@ -0,0 +1,12 @@
DECLARE @SQL NVARCHAR(max) = N'';
SELECT @SQL = @SQL + 'DBCC CHECKDB ([' + name + ']) WITH NO_INFOMSGS;'
FROM sys.databases
WHERE state = 0
AND source_database_id IS NULL
AND name NOT IN ('tempdb', 'model');
EXEC (@SQL);

View File

@ -0,0 +1,42 @@
CREATE OR ALTER TRIGGER E_DDL_CREATE_TABLE
ON DATABASE
FOR CREATE_TABLE
AS
BEGIN
DECLARE @XML XML = EVENTDATA();
-- name check
IF @XML.value('(/EVENT_INSTANCE/ObjectName)[1]', 'sysname') NOT LIKE 'T?_%' ESCAPE('?')
OR @XML.value('(/EVENT_INSTANCE/ObjectName)[1]', 'sysname') NOT LIKE '%?_[A-Z][A-Z][A-Z]' ESCAPE('?')
BEGIN
ROLLBACK;
THROW 66666, 'Le nom d''une table doit être préfixé par "T_" et suffixé par un trigramme, par exemple "_ABC"', 1;
END;
-- use of obsolete data types
IF EXISTS(SELECT *
FROM INFORMATION_SCHEMA.COLUMNS
WHERE TABLE_NAME = @XML.value('(/EVENT_INSTANCE/ObjectName)[1]', 'sysname')
AND TABLE_SCHEMA = @XML.value('(/EVENT_INSTANCE/SchemaName)[1]', 'sysname')
AND DATA_TYPE IN ('text', 'ntext', 'image', 'datetime', 'smalldatetime'))
BEGIN
ROLLBACK;
THROW 66666, 'les colonnes d''une table ne doivent pas comporter de type de données obsolètes ("text", "ntext", "image", "datetime", "smalldatetime").', 1;
END;
END;
GO
CREATE TRIGGER E_DDL_CREATE_DATABASE
ON ALL SERVER
FOR CREATE_DATABASE
AS
BEGIN
DECLARE @XML XML = EVENTDATA();
IF @XML.value('(/EVENT_INSTANCE/DatabaseName)[1]', 'sysname') NOT LIKE 'DB?_%' ESCAPE '?'
BEGIN
ROLLBACK;
THROW 66666, 'Le nom d''une base de données doit être préfixé par "DB_".', 1;
END;
END;
GO

View File

@ -0,0 +1,48 @@
SELECT mf.*, volume_mount_point, 1.0 * total_bytes / POWER(1024, 3) AS DISK_SIZE_GB,
1.0 * available_bytes / POWER(1024, 3) AS FREE_SIZE_GB,
100 * (1 - 1.0 * available_bytes/total_bytes) PERCENT_USED
FROM sys.master_files AS mf
CROSS APPLY sys.dm_os_volume_stats(database_id, file_id);
WITH
disk_activity AS
(SELECT LEFT(mf.physical_name, 2) AS Drive,
SUM(num_of_reads) AS num_of_reads,
SUM(io_stall_read_ms) AS io_stall_read_ms,
SUM(num_of_writes) AS num_of_writes,
SUM(io_stall_write_ms) AS io_stall_write_ms,
SUM(num_of_bytes_read) AS num_of_bytes_read,
SUM(num_of_bytes_written) AS num_of_bytes_written, SUM(io_stall) AS io_stall
FROM sys.dm_io_virtual_file_stats(NULL, NULL) AS vfs
INNER JOIN sys.master_files AS mf WITH (NOLOCK)
ON vfs.database_id = mf.database_id AND vfs.file_id = mf.file_id
GROUP BY LEFT(mf.physical_name, 2))
SELECT (SELECT sqlserver_start_time FROM sys.dm_os_sys_info) AS SINCE,
Drive AS DISK_DRIVE,
CASE
WHEN num_of_reads = 0 THEN 0
ELSE (io_stall_read_ms/num_of_reads)
END AS READ_LATENCY,
CASE
WHEN io_stall_write_ms = 0 THEN 0
ELSE (io_stall_write_ms/num_of_writes)
END AS WRITE_LATENCY,
CASE
WHEN (num_of_reads = 0 AND num_of_writes = 0) THEN 0
ELSE (io_stall/(num_of_reads + num_of_writes))
END AS GLOBAL_LATENCY,
CASE
WHEN num_of_reads = 0 THEN 0
ELSE (num_of_bytes_read/num_of_reads)
END AS AVG_BYTES_PER_READ,
CASE
WHEN io_stall_write_ms = 0 THEN 0
ELSE (num_of_bytes_written/num_of_writes)
END AS AVG_BYTES_PER_WRITE,
CASE
WHEN (num_of_reads = 0 AND num_of_writes = 0) THEN 0
ELSE ((num_of_bytes_read + num_of_bytes_written)/(num_of_reads + num_of_writes))
END AS AVG_BYTES_PER_TRANSFER
FROM disk_activity AS tab
ORDER BY GLOBAL_LATENCY
OPTION (RECOMPILE);

View File

@ -0,0 +1,47 @@
USE master;
GO
CREATE PROCEDURE dbo.sp__DB_SNAP @DB NVARCHAR(128), @PATH NVARCHAR(256)
AS
IF NOT EXISTS(SELECT *
FROM master.sys.databases
WHERE name = @DB
AND source_database_id IS NULL
AND state_desc = 'ONLINE')
BEGIN
RAISERROR('Le nom de base %s n''existe pas sur ce serveur ou n''est pas en état copiable.', 16, 1, @DB);
RETURN;
END;
IF RIGHT(@PATH, 1) <> '\'
SET @PATH = @PATH + '\';
DECLARE @T TABLE (file_exists bit, file_is_dir bit, parent_dir_exists bit);
INSERT INTO @T
EXEC master.sys.xp_fileexist @PATH;
IF NOT EXISTS(SELECT 0
FROM @T
WHERE file_is_dir = 1)
BEGIN
RAISERROR('Le chemin passé en arguement, n''est pas un répertoire valide.' , 16, 1);
RETURN
END
DECLARE @SQL VARCHAR(MAX);
SET @SQL = 'CREATE DATABASE [' + @DB +'_SNAP_'
+ REPLACE(REPLACE(REPLACE(REPLACE(CONVERT(NVARCHAR(23), CURRENT_TIMESTAMP, 121), '-', ''), ' ', '_'), ':', ''), '.', '_')
+ '] ON '
SELECT @SQL = @SQL + '( NAME = ' + name +', FILENAME = '''
+ @PATH + REVERSE(SUBSTRING(REVERSE(physical_name), 1, CHARINDEX('\', REVERSE(physical_name)) - 1))
+ '''),'
from sys.master_files
WHERE type = 0
AND database_id = DB_ID(@DB)
SET @SQL = SUBSTRING(@SQL, 1, LEN(@SQL) - 1) + ' AS SNAPSHOT OF ['
+ @DB + ']'
EXEC (@SQL)
GO
EXEC sp_MS_marksystemobject 'sp__DB_SNAP'

View File

@ -0,0 +1,13 @@
CREATE TRIGGER E_LOGON_LIMIT_SA
ON ALL SERVER
FOR LOGON
AS
BEGIN
IF EXISTS(SELECT 1
FROM sys.dm_exec_sessions
WHERE is_user_process = 1
AND ORIGINAL_LOGIN()= 'sa'
AND original_login_name = 'sa'
HAVING COUNT(*) > 1)
ROLLBACK;
END;

View File

@ -0,0 +1,42 @@
SELECT *
FROM sys.dm_exec_connections
OUTER APPLY sys.dm_exec_sql_text(most_recent_sql_handle);
SELECT *
FROM sys.dm_exec_sessions AS s
LEFT OUTER JOIN sys.dm_exec_connections AS c
ON s.session_id = c.session_id
CROSS APPLY sys.dm_exec_sql_text(most_recent_sql_handle);
SELECT *
FROM sys.dm_exec_sessions AS s
LEFT OUTER JOIN sys.dm_exec_connections AS c
ON s.session_id = c.session_id
LEFT OUTER JOIN sys.dm_exec_requests AS r
ON s.session_id = r.session_id
OUTER APPLY sys.dm_exec_sql_text(sql_handle)
OUTER APPLY sys.dm_exec_query_plan(plan_handle);
--> blockers and blocked
SELECT *
FROM sys.dm_exec_sessions AS s
LEFT OUTER JOIN sys.dm_exec_connections AS c
ON s.session_id = c.session_id
LEFT OUTER JOIN sys.dm_exec_requests AS r
ON s.session_id = r.session_id
OUTER APPLY sys.dm_exec_sql_text(sql_handle)
OUTER APPLY sys.dm_exec_query_plan(plan_handle)
WHERE s.session_id IN (SELECT blocking_session_id
FROM sys.dm_exec_requests AS r
WHERE blocking_session_id > 0)
UNION ALL
SELECT *
FROM sys.dm_exec_sessions AS s
LEFT OUTER JOIN sys.dm_exec_connections AS c
ON s.session_id = c.session_id
LEFT OUTER JOIN sys.dm_exec_requests AS r
ON s.session_id = r.session_id
OUTER APPLY sys.dm_exec_sql_text(sql_handle)
OUTER APPLY sys.dm_exec_query_plan(plan_handle)
WHERE blocking_session_id <> 0;

View File

@ -0,0 +1,23 @@
-- diagnosis of missing indexes
--!!!!!!!!!!!!!!!!!!!!!!!!! WARNING: do not use unless the RDBMS has been running CONTINUOUSLY for at least 30 days
SELECT sqlserver_start_time FROM sys.dm_os_sys_info
-- 2 approaches
-- 1) rough: take the top n indexes that yield the MAXIMUM gain
SELECT TOP (SELECT COUNT(*) / 3 FROM [dbo].[sys_dm_db_missing_index_details])
statement, equality_columns, inequality_columns, included_columns
FROM [dbo].[sys_dm_db_missing_index_details] AS mid
JOIN [dbo].[sys_dm_db_missing_index_groups] AS mig ON mid.index_handle = mig.index_handle
JOIN [dbo].[sys_dm_db_missing_index_group_stats] AS mis ON mig.index_group_handle = mis.group_handle
ORDER BY statement, equality_columns, inequality_columns, included_columns
-- 2) analyze the relevance of the indexes to create...
SELECT statement, equality_columns, inequality_columns, included_columns
FROM [dbo].[sys_dm_db_missing_index_details] AS mid
ORDER BY statement, COALESCE(equality_columns+ ', ' + inequality_columns, equality_columns, inequality_columns)
CREATE INDEX XYZ ON [Anthadine_prod].[dbo].[C_C_CSTPAT] ([CSTTYP_CODE], [INDI_CODE], [CSTPAT_DATE], [CSTPAT_ESTREF])
INCLUDE ([CSTPAT_CODE], [UTIL_INDI_CODE], [CSTPAT_HEURE], [IDENT], [CSTPAT_TEXTE], [CSTPAT_VALE], [CSTPAT_ESTREF], [TRACE_INS_INDI_CODE], [TRACE_UPD_INDI_CODE], [TRACE_INS_DATE], [TRACE_UPD_DATE], [TRACE_FLAG_DELETE], [EFEV_CODE])

View File

@ -0,0 +1,78 @@
WITH
-- CTE subquery listing the indexes with their columns
T0 AS (SELECT ic.object_id, index_id, c.column_id, key_ordinal,
CASE is_descending_key
WHEN '0' THEN 'ASC'
WHEN '1' THEN 'DESC'
END AS sens, c.name AS column_name,
ROW_NUMBER() OVER(PARTITION BY ic.object_id, index_id ORDER BY key_ordinal DESC) AS N,
is_included_column
FROM sys.index_columns AS ic
INNER JOIN sys.columns AS c
ON ic.object_id = c.object_id
AND ic.column_id = c.column_id
WHERE key_ordinal > 0
AND index_id > 0),
-- recursive CTE composing the index keys in algebraic and literal form
T1 AS (SELECT object_id, index_id, column_id, key_ordinal, N,
CASE WHEN is_included_column = 0 THEN CAST(column_name AS VARCHAR(MAX)) + ' ' + sens ELSE '' END AS COMP_LITTERALE,
CASE WHEN is_included_column = 0 THEN CAST(column_id AS VARCHAR(MAX)) + SUBSTRING(sens, 1, 1) ELSE '' END AS COMP_MATH,
MAX(N) OVER(PARTITION BY object_id, index_id) AS CMAX,
CASE WHEN is_included_column = 1 THEN CAST(column_name AS VARCHAR(MAX)) ELSE '' END AS COLONNES_INCLUSES
FROM T0
WHERE key_ordinal = 1
UNION ALL
SELECT T0.object_id, T0.index_id, T0.column_id, T0.key_ordinal, T0.N,
COMP_LITTERALE +
CASE WHEN is_included_column = 0 THEN ', ' + CAST(T0.column_name AS VARCHAR(MAX)) + ' ' + T0.sens ELSE '' END,
COMP_MATH +
CASE WHEN is_included_column = 0 THEN CAST(T0.column_id AS VARCHAR(MAX)) + SUBSTRING(T0.sens, 1, 1) ELSE '' END,
T1.CMAX, COLONNES_INCLUSES + CASE WHEN is_included_column = 1 THEN ', ' + CAST(column_name AS VARCHAR(MAX)) ELSE '' END
FROM T0
INNER JOIN T1
ON T0.object_id = T1.object_id
AND T0.index_id = T1.index_id
AND T0.key_ordinal = T1.key_ordinal + 1),
-- deduplication CTE
T2 AS (SELECT object_id, index_id, COMP_LITTERALE, COMP_MATH, CMAX, COLONNES_INCLUSES
FROM T1
WHERE N = 1),
-- subquery selecting the anomalies
T4 AS (SELECT T2.object_id, T2.index_id,
T3.index_id AS index_id_anomalie,
T2.COMP_LITTERALE AS CLEF_INDEX,
T3.COMP_LITTERALE AS CLEF_INDEX_ANORMAL,
T2.COLONNES_INCLUSES, T3.COLONNES_INCLUSES AS COLONNES_INCLUSES_ANORMAL,
CASE
WHEN T2.COMP_MATH = T3.COMP_MATH
THEN 'DOUBLONS'
WHEN T2.COMP_MATH LIKE T3.COMP_MATH +'%'
THEN 'INCLUS'
END AS ANOMALIE,
ABS(T2.CMAX - T3.CMAX) AS DISTANCE
FROM T2
INNER JOIN T2 AS T3
ON T2.object_id = T3.object_id
AND T2.index_id <> T3.index_id
AND T2.COMP_MATH LIKE T3.COMP_MATH +'%')
-- final query adding the missing information
SELECT T4.*,
s.name +'.' + o.name AS NOM_TABLE,
i1.name AS NOM_INDEX,
i2.name AS NOM_INDEX_ANORMAL
--, i1.filter_definition AS FILTRE_INDEX
--, i2.filter_definition AS FILTRE_INDEX_ANORMAL
FROM T4
INNER JOIN sys.objects AS o
ON T4.object_id = o.object_id
INNER JOIN sys.schemas AS s
ON o.schema_id = s.schema_id
INNER JOIN sys.indexes AS i1
ON T4.object_id = i1.object_id
AND T4.index_id = i1.index_id
INNER JOIN sys.indexes AS i2
ON T4.object_id = i2.object_id
AND T4.index_id_anomalie = i2.index_id
WHERE o."type" IN ('U', 'V')
ORDER BY NOM_TABLE, NOM_INDEX;

View File

@ -0,0 +1,27 @@
-- diagnosis of fragmented indexes
SELECT * FROM sys.dm_db_index_physical_stats(NULL, NULL, NULL, NULL, NULL)
WHERE avg_fragmentation_in_percent > 10
AND page_count>64
-- ALTER INDEX .... REORGANIZE --> not blocking, but not as thorough
-- ALTER INDEX .... REBUILD --> blocking, but thorough (can be non-blocking with the Enterprise edition and ONLINE mode)
-- diagnosis of unused indexes
-- !!!!!!!!!!!!!!!!!!!!!!!!! WARNING: do not use unless the RDBMS has been running CONTINUOUSLY for at least 30 days
SELECT sqlserver_start_time FROM sys.dm_os_sys_info
SELECT *
FROM sys.dm_db_index_usage_stats
WHERE index_id > 1
ORDER BY user_seeks, user_lookups, user_scans, user_updates DESC
-- recompute the statistics
SELECT *
FROM sys.stats AS s
CROSS APPLY sys.dm_db_stats_properties(s.object_id, s.stats_id)
WHERE modification_counter > rows / 10
UPDATE STATISTICS .... WITH FULLSCAN

View File

@ -0,0 +1,10 @@
DECLARE @LOGSPACE TABLE
(DATABASE_NAME sysname,
LOG_SIZE_MB FLOAT,
LOG_USE_PERCENT FLOAT,
STATUS INT);
INSERT INTO @LOGSPACE
EXEC ('DBCC SQLPERF(LOGSPACE);');
SELECT * FROM @LOGSPACE WHERE LOG_USE_PERCENT > 30;

View File

@ -0,0 +1,32 @@
USE [DB_GRAND_HOTEL];
GO
-- add a partition:
-- 1) create the storage
ALTER DATABASE [DB_GRAND_HOTEL]
ADD FILEGROUP FG_PART_2009;
ALTER DATABASE [DB_GRAND_HOTEL]
ADD FILE (NAME = 'F_PART_2009',
FILENAME = 'H:\DATABASE_SQL\SQL2019FBIN2\DATA\HOTEL_PART_2009.ndf',
SIZE = 25,
FILEGROWTH = 10)
TO FILEGROUP FG_PART_2009;
-- 2) update the partition scheme
ALTER PARTITION SCHEME PS_DATE_FACTURE
NEXT USED FG_PART_2009;
GO
-- 3) add the partitioning boundary
ALTER PARTITION FUNCTION PF_DATE_FACTURE()
SPLIT RANGE ('2009-01-01');
-- other possibilities
-- XXX) rearrange the partitions => ALTER PARTITION FUNCTION ... MERGE
-- XXX) delete the data of some partitions => TRUNCATE TABLE ... WITH (PARTITIONS (...))

View File

@ -0,0 +1,32 @@
-- top 10 worst-performing queries
SELECT TOP 10
SUBSTRING(text,
(statement_start_offset/2) + 1,
((CASE statement_end_offset
WHEN -1 THEN DATALENGTH(text)
ELSE statement_end_offset
END - statement_start_offset)/2) + 1)
AS QUERY,
*
FROM sys.dm_exec_query_stats
CROSS APPLY sys.dm_exec_sql_text(sql_handle)
CROSS APPLY sys.dm_exec_query_plan(plan_handle);
GO
-- top 10 worst-performing procedures
SELECT *
FROM sys.dm_exec_procedure_stats
CROSS APPLY sys.dm_exec_sql_text(sql_handle)
CROSS APPLY sys.dm_exec_query_plan(plan_handle);
-- top 10 worst-performing triggers
SELECT *
FROM sys.dm_exec_trigger_stats
CROSS APPLY sys.dm_exec_sql_text(sql_handle)
CROSS APPLY sys.dm_exec_query_plan(plan_handle);
-- top 10 worst-performing UDFs
SELECT *
FROM sys.dm_exec_function_stats
CROSS APPLY sys.dm_exec_sql_text(sql_handle)
CROSS APPLY sys.dm_exec_query_plan(plan_handle);

View File

@ -0,0 +1,262 @@
/******************************************************************************
* PREPARATION
******************************************************************************/
USE master;
GO
IF EXISTS(SELECT * FROM sys.configurations WHERE name = 'xp_cmdshell' AND value_in_use = 0)
BEGIN
IF EXISTS(SELECT * FROM sys.configurations WHERE name = 'show advanced options' AND value_in_use = 0)
BEGIN
EXEC ('EXEC sp_configure ''show advanced options'', 1');
EXEC ('RECONFIGURE');
END
EXEC ('EXEC sp_configure ''xp_cmdshell'', 1');
EXEC ('RECONFIGURE');
END;
GO
EXEC xp_cmdshell 'MKDIR "C:\!\SQL Server Database AUDIT\"';
EXEC xp_cmdshell 'MKDIR "C:\!\SQL Server BACKUP\"';
IF EXISTS(SELECT * FROM sys.databases WHERE name = 'DB_TEST_AUDIT')
BEGIN
EXEC ('USE DB_TEST_AUDIT;ALTER DATABASE DB_TEST_AUDIT SET SINGLE_USER WITH ROLLBACK IMMEDIATE;')
EXEC ('USE master;DROP DATABASE DB_TEST_AUDIT');
END
GO
IF EXISTS(SELECT * FROM sys.server_principals WHERE name = 'CNX_LECTEUR')
EXEC ('DROP LOGIN CNX_LECTEUR');
GO
IF EXISTS(SELECT * FROM sys.server_principals WHERE name = 'CNX_ECRIVAIN')
EXEC ('DROP LOGIN CNX_ECRIVAIN');
GO
IF EXISTS(SELECT * FROM sys.server_audits WHERE name = 'SVA_FRED')
BEGIN
EXEC ('USE master;ALTER SERVER AUDIT SVA_FRED WITH (STATE = OFF);')
EXEC ('USE master;DROP SERVER AUDIT SVA_FRED');
END
GO
IF EXISTS(SELECT * FROM sys.server_audit_specifications WHERE name = 'SAS_BACKUP_RESTORE_SRV')
BEGIN
EXEC ('USE master;ALTER SERVER AUDIT SPECIFICATION SAS_BACKUP_RESTORE_SRV WITH (STATE = OFF);')
EXEC ('USE master;DROP SERVER AUDIT SPECIFICATION SAS_BACKUP_RESTORE_SRV');
END
GO
IF EXISTS(SELECT * FROM sys.server_triggers WHERE name = 'E_LOGON')
EXEC ('DROP TRIGGER E_LOGON ON ALL SERVER');
GO
EXEC xp_cmdshell 'DEL /Q "C:\!\SQL Server BACKUP\*.*"';
EXEC xp_cmdshell 'DEL /Q "C:\!\SQL Server Database AUDIT\*.*"';
GO
/******************************************************************************
* CREATING THE TEST DATABASE AND SETTING UP OBJECTS
******************************************************************************/
USE master;
GO
CREATE DATABASE DB_TEST_AUDIT;
GO
CREATE LOGIN CNX_LECTEUR
WITH PASSWORD = 'Maux 2 p@stAga',
DEFAULT_DATABASE = DB_TEST_AUDIT,
DEFAULT_LANGUAGE = French;
GO
CREATE LOGIN CNX_ECRIVAIN
WITH PASSWORD = 'Maux 2 p@stAga',
DEFAULT_DATABASE = DB_TEST_AUDIT,
DEFAULT_LANGUAGE = French;
GO
USE DB_TEST_AUDIT;
GO
CREATE USER USR_LECTEUR FROM LOGIN CNX_LECTEUR;
GO
CREATE USER USR_ECRIVAIN FROM LOGIN CNX_ECRIVAIN;
GO
CREATE SCHEMA S_BOUQUIN
GO
CREATE TABLE S_BOUQUIN.T_LIVRE_LVR
(LVR_ID INT IDENTITY PRIMARY KEY,
LVR_TITRE VARCHAR(256))
GO
CREATE TABLE dbo.acces
(id INT IDENTITY PRIMARY KEY,
nom sysname DEFAULT USER,
moment DATETIME2(3) DEFAULT GETDATE())
GO
GRANT SELECT ON DATABASE::DB_TEST_AUDIT TO USR_LECTEUR;
GO
GRANT INSERT ON dbo.acces TO USR_LECTEUR;
GO
GRANT SELECT, INSERT, UPDATE, DELETE ON SCHEMA::S_BOUQUIN TO USR_ECRIVAIN;
GO
GRANT INSERT ON dbo.acces TO USR_ECRIVAIN;
GO
CREATE TRIGGER E_LOGON
ON ALL SERVER
FOR LOGON
AS
IF EXISTS(SELECT *
FROM sys.server_principals
WHERE name = USER
AND default_database_name = 'DB_TEST_AUDIT')
INSERT INTO DB_TEST_AUDIT.dbo.acces
DEFAULT VALUES;
GO
/******************************************************************************
* AUDIT: setting up the tracking
******************************************************************************/
USE master;
GO
-- create the storage target for the audits
CREATE SERVER AUDIT SVA_FRED
TO FILE ( FILEPATH = 'C:\!\SQL Server Database AUDIT\'
, MAXSIZE = 1 GB
, MAX_ROLLOVER_FILES = 256
, RESERVE_DISK_SPACE = OFF )
WITH ( QUEUE_DELAY = 3000
, ON_FAILURE = SHUTDOWN );
GO
-- tracking an action group at the server level:
CREATE SERVER AUDIT SPECIFICATION SAS_BACKUP_RESTORE_SRV
FOR SERVER AUDIT SVA_FRED
ADD (BACKUP_RESTORE_GROUP);
GO
-- covers backups and restores of all databases
USE DB_TEST_AUDIT;
GO
-- tracking an action group at the database level
CREATE DATABASE AUDIT SPECIFICATION SAS_SUIVI_BASE
FOR SERVER AUDIT SVA_FRED
ADD (DATABASE_OBJECT_CHANGE_GROUP)
GO
-- covers structural changes to the objects of the database
-- tracking of a specific action at the object level, for the whole database
ALTER DATABASE AUDIT SPECIFICATION SAS_SUIVI_BASE
ADD ( SELECT
ON DATABASE::DB_TEST_AUDIT
BY dbo);
-- covers SELECT on all the objects of the DB_TEST_AUDIT database
-- tracking of several specific actions at the object level, for a whole SQL schema
ALTER DATABASE AUDIT SPECIFICATION SAS_SUIVI_BASE
ADD ( INSERT, UPDATE, DELETE
ON SCHEMA::S_BOUQUIN
BY USR_ECRIVAIN);
-- covers the data changes (INSERT, UPDATE, DELETE) on all the objects of the S_BOUQUIN SQL schema
-- tracking of the INSERT action on the dbo.acces table.
ALTER DATABASE AUDIT SPECIFICATION SAS_SUIVI_BASE
ADD ( INSERT
ON dbo.acces
BY USR_ECRIVAIN, USR_LECTEUR);
GO
-- start the audit at the server level
USE master;
GO
-- for the storage target
ALTER SERVER AUDIT SVA_FRED
WITH (STATE = ON);
GO
-- for the server-level tracking
ALTER SERVER AUDIT SPECIFICATION SAS_BACKUP_RESTORE_SRV
WITH (STATE = ON);
GO
-- start the audit at the database level
USE DB_TEST_AUDIT;
GO
ALTER DATABASE AUDIT SPECIFICATION SAS_SUIVI_BASE
WITH (STATE = ON);
GO
/******************************************************************************
* VARIOUS DATA MANIPULATION TESTS
******************************************************************************/
ALTER TABLE S_BOUQUIN.T_LIVRE_LVR
ADD LVR_DH_INSERT DATETIME2(6) DEFAULT GETDATE();
--> connect as CNX_ECRIVAIN/Maux 2 p@stAga
INSERT INTO S_BOUQUIN.T_LIVRE_LVR (LVR_TITRE)
VALUES ('Guère épais'),
('Benne à or dur'),
('L''étroite Moustiquaire');
GO 100
UPDATE TOP (50) S_BOUQUIN.T_LIVRE_LVR
SET LVR_TITRE = 'Guerre et Paix'
WHERE LVR_TITRE = 'Guère épais';
GO
UPDATE TOP (10) S_BOUQUIN.T_LIVRE_LVR
SET LVR_TITRE = 'Les trois mousquetaires'
WHERE LVR_TITRE = 'L''étroite Moustiquaire';
GO
DELETE FROM S_BOUQUIN.T_LIVRE_LVR
WHERE LVR_TITRE = 'Benne à or dur';
GO
SELECT *
FROM S_BOUQUIN.T_LIVRE_LVR ;
GO
--> connect as CNX_LECTEUR/Maux 2 p@stAga
SELECT LVR_ID, LVR_TITRE
FROM S_BOUQUIN.T_LIVRE_LVR
WHERE LVR_TITRE LIKE '% et %' ;
GO
--> switch back to sysadmin
BACKUP DATABASE DB_TEST_AUDIT
TO DISK = 'C:\!\SQL Server BACKUP\DB_TEST_AUDIT.full.bak';
/******************************************************************************
* AUDIT: reading the audited data
******************************************************************************/
SELECT *
FROM sys.fn_get_audit_file ( 'C:\!\SQL Server Database AUDIT\*', default, default )

View File

View File

@ -0,0 +1,11 @@
-- Exercise on the storage of the "hotel" database
-- reorganize the storage of the DB_GRAND_HOTEL database as follows:
1) create a filegroup FG_DATA for the data, with 2 files of 100 MB
   and a growth increment of 64 MB. No size limit
2) make this filegroup the default filegroup
3) move the tables and indexes into this filegroup using the query provided
4) resize the transaction log to 100 MB and the master data file to 10 MB
5) set the growth increment of the log to 64 MB and of the mdf file to 10 MB

View File

@ -0,0 +1,65 @@
-- create three logins:
-- CNX_LECTEUR
-- CNX_ECRIVAIN
-- CNX_ADMIN
-- French language, default database DB_GRAND_HOTEL
-- password => 'SQL2019x'
USE master;
GO
CREATE LOGIN CNX_LECTEUR
WITH PASSWORD = 'SQL2019x',
DEFAULT_LANGUAGE = Français,
DEFAULT_DATABASE = DB_GRAND_HOTEL;
CREATE LOGIN CNX_ECRIVAIN
WITH PASSWORD = 'SQL2019x',
DEFAULT_LANGUAGE = Français,
DEFAULT_DATABASE = DB_GRAND_HOTEL;
CREATE LOGIN CNX_ADMIN
WITH PASSWORD = 'SQL2019x',
DEFAULT_LANGUAGE = Français,
DEFAULT_DATABASE = DB_GRAND_HOTEL;
GO
--> TEST whether you can connect!
USE [DB_GRAND_HOTEL]
GO
-- create 3 SQL users for these logins
-- USR_LECTEUR
-- USR_ECRIVAIN
-- USR_ADMIN
CREATE USER USR_LECTEUR
FOR LOGIN CNX_LECTEUR;
CREATE USER USR_ECRIVAIN
FOR LOGIN CNX_ECRIVAIN;
CREATE USER USR_ADMIN
FOR LOGIN CNX_ADMIN;
-- USR_REPORT, which connects directly with the password 'SQL2019report', French language
IF EXISTS(SELECT *
FROM sys.configurations
WHERE name = 'contained database authentication'
AND value_in_use = 0)
BEGIN
EXEC ('EXEC sp_configure ''contained database authentication'', 1;')
EXEC ('RECONFIGURE;')
END;
GO
USE [master]
GO
ALTER DATABASE [DB_GRAND_HOTEL] SET CONTAINMENT = PARTIAL WITH NO_WAIT
GO
USE [DB_GRAND_HOTEL]
GO
CREATE USER USR_REPORT
WITH PASSWORD = 'SQL2019report';
GO

View File

@ -0,0 +1,14 @@
1) back up the DB_GRAND_HOTEL database
--> BACKUP DATABASE DB_GRAND_HOTEL
TO DISK = 'C:\SQL\BACKUP\DB_GRAND_HOTEL.BAK'
WITH COMPRESSION;
2) set up TDE (Transparent Data Encryption)
3) take a new backup
--> BACKUP DATABASE DB_GRAND_HOTEL
TO DISK = 'C:\SQL\BACKUP\DB_GRAND_HOTEL_TDE.BAK'
WITH COMPRESSION;
--> compare the contents of the two files with a text editor

View File

@ -0,0 +1,34 @@
-- take full backups of the databases: master, msdb, DB_GRAND_HOTEL
BACKUP DATABASE master
TO DISK = 'master.bak'
WITH COMPRESSION, RETAINDAYS = 3;
BACKUP DATABASE msdb
TO DISK = 'msdb.bak'
WITH COMPRESSION, RETAINDAYS = 3;
BACKUP DATABASE DB_GRAND_HOTEL
TO DISK = 'DB_GRAND_HOTEL.bak'
WITH COMPRESSION, RETAINDAYS = 3;
-- scheduled daily at 10 pm
-- back up the transaction log of the DB_GRAND_HOTEL database
BACKUP LOG DB_GRAND_HOTEL
TO DISK = 'DB_GRAND_HOTEL.trn'
WITH RETAINDAYS = 3;
-- scheduled every 20 minutes
RAISERROR('Mon erreur', 16, 1) WITH LOG;
EXECUTE AS USER = 'USR_ECRIVAIN'
RAISERROR('Mon erreur', 16, 1) WITH LOG;
-- IDERA, QUEST, APEX, REDGATE, SOLAR WINDS, SENTRY SQL
-- KUKANRU, free and lightweight

View File

@ -0,0 +1,130 @@
/***************************
****************************
* THE QUERIES TO OPTIMIZE  *
****************************
***************************/
/*
SET STATISTICS IO OFF
SET STATISTICS TIME ON
--> 32867 IO
DROP INDEX X ON T_EMPLOYEE_EMP;
CREATE INDEX X ON T_EMPLOYEE_EMP (EMP_SEXE); --> 32867
CREATE INDEX X ON T_EMPLOYEE_EMP (EMP_SERVICE); --> 32867
CREATE INDEX X ON T_EMPLOYEE_EMP (EMP_SEXE) INCLUDE (EMP_SERVICE); --> 4535
CREATE INDEX X ON T_EMPLOYEE_EMP (EMP_SEXE, EMP_SERVICE); --> 82
CREATE INDEX X ON T_EMPLOYEE_EMP (EMP_SERVICE, EMP_SEXE); --> 77
CREATE INDEX X ON T_EMPLOYEE_EMP (EMP_SERVICE) INCLUDE (EMP_SEXE); --> 77
CREATE INDEX X ON T_EMPLOYEE_EMP (EMP_SEXE) WHERE EMP_SERVICE = 'RH';--> 54
CREATE INDEX X ON T_EMPLOYEE_EMP (EMP_SEXE) WHERE EMP_SERVICE = 'RH' WITH (DATA_COMPRESSION = ROW); --> 48
CREATE INDEX X ON T_EMPLOYEE_EMP (EMP_SEXE) WHERE EMP_SERVICE = 'RH' WITH (DATA_COMPRESSION = PAGE); --> 31
CREATE INDEX X ON T_EMPLOYEE_EMP (EMP_SEXE) INCLUDE (EMP_SERVICE) WITH (DATA_COMPRESSION = PAGE);--> 1410
CREATE INDEX X ON T_EMPLOYEE_EMP (EMP_SERVICE) INCLUDE (EMP_SEXE) WITH (DATA_COMPRESSION = PAGE); --> 36 (UC = 15, TE = 20)
DROP INDEX XC ON T_EMPLOYEE_EMP
CREATE COLUMNSTORE INDEX XC ON T_EMPLOYEE_EMP (EMP_SERVICE, EMP_SEXE); --> 3 segments (UC = 500, TE = 300)
(UC = 0, TE = 4 ms
-- indexed view
CREATE VIEW V_EMP_SEXE_SERVICE
WITH SCHEMABINDING
AS
SELECT EMP_SERVICE, EMP_SEXE, COUNT_BIG(*) AS NOMBRE
FROM [dbo].[T_EMPLOYEE_EMP]
GROUP BY EMP_SERVICE, EMP_SEXE;
GO
CREATE UNIQUE CLUSTERED INDEX XV ON V_EMP_SEXE_SERVICE (EMP_SERVICE, EMP_SEXE);
GO
With the Standard edition, use the NOEXPAND hint and query the view
*/
-- 1
SELECT COUNT(*), EMP_SEXE
FROM T_EMPLOYEE_EMP
WHERE EMP_SERVICE = 'RH'
GROUP BY EMP_SEXE;
-- 2
SELECT COUNT(*) AS NOMBRE, 'Homme' AS SEXE
FROM T_EMPLOYEE_EMP
WHERE EMP_SERVICE = 'RH'
AND EMP_SEXE = 'Homme'
UNION
SELECT COUNT(*) AS NOMBRE, 'Femme' AS SEXE
FROM T_EMPLOYEE_EMP
WHERE EMP_SERVICE = 'RH'
AND EMP_SEXE = 'Femme';
GO
-- 3
SELECT COUNT(*) AS HOMME,
(SELECT COUNT(*)
FROM T_EMPLOYEE_EMP
WHERE EMP_SERVICE = 'RH'
AND EMP_SEXE = 'Femme') AS FEMME
FROM T_EMPLOYEE_EMP
WHERE EMP_SERVICE = 'RH'
AND EMP_SEXE = 'Homme';
GO
-- 4
SELECT COUNT(*) - (SELECT COUNT(*)
FROM T_EMPLOYEE_EMP E2
WHERE E2.EMP_SERVICE = E1.EMP_SERVICE
AND EMP_SEXE = 'Femme') AS HOMME,
(SELECT COUNT(*)
FROM T_EMPLOYEE_EMP E3
WHERE E3.EMP_SERVICE = E1.EMP_SERVICE
AND EMP_SEXE = 'Femme') AS FEMME
FROM T_EMPLOYEE_EMP E1
WHERE EMP_SERVICE = 'RH'
GROUP BY EMP_SERVICE;
GO
-- 5
SELECT SUM(CASE EMP_SEXE
WHEN 'Homme' THEN 1
WHEN 'Femme' THEN 0
END) AS NOMBRE_HOMME,
SUM(CASE EMP_SEXE
WHEN 'Homme' THEN 0
WHEN 'Femme' THEN 1
END) AS NOMBRE_FEMME
FROM dbo.T_EMPLOYEE_EMP
WHERE EMP_SERVICE= 'RH';
GO
-- 6
SELECT COUNT(EMP_SEXE) AS NOMBRE,
CASE EMP_SEXE
WHEN 'Femme' THEN 'Femme'
WHEN 'Homme' THEN 'Homme'
ELSE 'Unknown'
END AS SEXE
FROM dbo.T_EMPLOYEE_EMP
WHERE EMP_SERVICE= 'RH'
GROUP BY EMP_SEXE;
GO
-- 7
SELECT COUNT(*) AS Nombre, 'Femme' AS Sexe
FROM dbo.T_EMPLOYEE_EMP
WHERE EMP_ID NOT IN (SELECT EMP_ID
FROM dbo.T_EMPLOYEE_EMP
WHERE EMP_SERVICE <> 'RH'
OR EMP_SEXE = 'Homme')
UNION ALL
SELECT COUNT(*) AS Nombre, 'Homme' AS Sexe
FROM dbo.T_EMPLOYEE_EMP
WHERE EMP_ID NOT IN (SELECT EMP_ID
FROM dbo.T_EMPLOYEE_EMP
WHERE EMP_SERVICE <> 'RH'
OR EMP_SEXE = 'Femme');
GO

View File

@ -0,0 +1,28 @@
USE DB_GRAND_HOTEL;
GO
SELECT name, physical_name
FROM sys.database_files
WHERE type_desc = 'rows';
GO
/*
name physical_name
DB_GRAND_HOTEL H:\DATABASE_SQL\SQL2019FBIN2\DATA\DB_GRAND_HOTEL.mdf
F_DATA_1 H:\DATABASE_SQL\SQL2019FBIN2\DATA\F_DATA_1.ndf
F_DATA_2 H:\DATABASE_SQL\SQL2019FBIN2\DATA\F_DATA_2.ndf
F_PART_OLD H:\DATABASE_SQL\SQL2019FBIN2\DATA\HOTEL_PART_OLD.ndf
F_PART_2006 H:\DATABASE_SQL\SQL2019FBIN2\DATA\HOTEL_PART_2006.ndf
F_PART_2007 H:\DATABASE_SQL\SQL2019FBIN2\DATA\HOTEL_PART_2007.ndf
F_PART_2008 H:\DATABASE_SQL\SQL2019FBIN2\DATA\HOTEL_PART_2008.ndf
F_PART_2009 H:\DATABASE_SQL\SQL2019FBIN2\DATA\HOTEL_PART_2009.ndf
*/
-- 1) take the database OFFLINE (NOTE the time at which the command is run!)
-- 2) delete one of the data files
-- 3) try to bring the database back ONLINE... what will happen? (check the event log)
-- 4) back up the tail of the transaction log with a SQL command (do not use the GUI)
-- 5) drop the database
-- 6) use the restore wizard to restore the entire database
-- 7) check whether you lost any data by comparing the last INSERT in the HORLOGE table with the time you noted

View File

@ -0,0 +1,24 @@
-- create the storage
ALTER DATABASE ... ADD FILEGROUP ...
ALTER DATABASE ... ADD FILE ( ... ) TO FILEGROUP ...
-- 1) create the partition function
CREATE PARTITION FUNCTION PF_DATE_FACTURE (DATETIME2(0))
AS RANGE RIGHT
FOR VALUES ('2018-01-01', '2019-01-01', '2020-01-01');
-- 2) create the partition scheme
CREATE PARTITION SCHEME PS_DATE_FACTURE
AS PARTITION PF_DATE_FACTURE
TO (FG_OLD, FG_2018, FG_2019, FG_2020);
-- 3) create the object on the partition scheme
CREATE TABLE ( ... )
ON PS_DATE_FACTURE (colonne_critère)
CREATE INDEX ( ... ) --> the first column of the index key must be the partitioning column
ON PS_DATE_FACTURE (colonne_critère)

View File

@ -0,0 +1,63 @@
-- the CNX_ADMIN login must be granted the following privileges:
USE master;
GO
GRANT VIEW SERVER STATE,
ADMINISTER BULK OPERATIONS,
CREATE ANY DATABASE
-- ON SERVER::[HPZ840FB\SQL2019FBIN2]
TO CNX_ADMIN;
GO
USE DB_GRAND_HOTEL;
GO
-- the USR_LECTEUR user must be granted the following privileges:
--> SELECT on the whole database
GRANT SELECT
-- ON DATABASE::DB_GRAND_HOTEL
TO USR_LECTEUR;
-- the USR_ECRIVAIN user must be granted the following privileges:
--> SELECT, INSERT, UPDATE, DELETE on the whole database
--> plus EXECUTE on the database, except on the dbo schema.
GRANT SELECT, INSERT, UPDATE, DELETE, EXECUTE
-- ON DATABASE::DB_GRAND_HOTEL
TO USR_ECRIVAIN;
DENY EXECUTE
ON SCHEMA::dbo
TO USR_ECRIVAIN;
-- the USR_ADMIN user must have all privileges on the database
GRANT CONTROL
-- ON DATABASE::DB_GRAND_HOTEL
TO USR_ADMIN;
-- the USR_REPORT user must be granted read privileges on the database, except on the dbo schema.
GRANT SELECT
-- ON DATABASE::DB_GRAND_HOTEL
TO USR_REPORT;
DENY SELECT
ON SCHEMA::dbo
TO USR_REPORT;
/*
GRANT <privilege list>
ON <object to grant on>
TO <list of grantee entities>
<privilege list> ::=
<privilege_name> [, <privilege_name2> [, <privilege_name3> [, ... ] ]
<privilege_name> ::= SQL_identifier
<object to grant on> ::=
{ <object_name> | <container_class>::<container_name> }
*/

View File

@ -0,0 +1,27 @@
USE master;
GO
CREATE MASTER KEY ENCRYPTION
BY PASSWORD = 'Vive la Covid-19 !';
GO
BACKUP MASTER KEY TO FILE = 'H:\DATABASE_SQL\BACKUPS\SQL2019FBIN2\MK.bkp'
ENCRYPTION BY PASSWORD = 'Vive la Covid-19 !'
CREATE CERTIFICATE CRT_FOR_TDE
WITH SUBJECT = 'Certificat pour le chiffrement TDE',
EXPIRY_DATE = '2022-01-01' ;
GO
USE [DB_GRAND_HOTEL];
GO
CREATE DATABASE ENCRYPTION KEY
WITH ALGORITHM = AES_128
ENCRYPTION BY SERVER CERTIFICATE CRT_FOR_TDE;
GO
ALTER DATABASE [DB_GRAND_HOTEL]
SET ENCRYPTION ON;
GO

View File

@ -0,0 +1,103 @@
WITH T AS
(
SELECT CONNEXION.name AS LOGIN_NAME,
GRANTEE.default_schema_name AS DEFAULT_SCHEMA,
PRIVILEGE.state_desc AS SQL_ORDER,
GRANTOR.name AS GRANTOR,
GRANTEE.name AS GRANTEE,
PRIVILEGE."permission_name" AS PRIVILEGE,
s.name AS OBJECT_SCHEMA,
o.name AS OBJECT_NAME,
LTRIM(STUFF((SELECT ', ' + name
FROM sys.columns AS c
WHERE PRIVILEGE.major_id = c.object_id
AND PRIVILEGE.minor_id = c.column_id
FOR XML PATH('')), 1, 1, '' )) AS COLUMN_LIST,
PRIVILEGE.class_desc AS OBJECT_CLASS,
CASE PRIVILEGE.class
WHEN 0 THEN DB_NAME()
WHEN 1 THEN o.type_desc
WHEN 3 THEN ss.name COLLATE database_default
WHEN 4 THEN dbp.name
WHEN 5 THEN asb.name
WHEN 6 THEN typ.name
WHEN 10 THEN xsc.name
END AS OBJECT_TYPE_OR_NAME
FROM sys.database_principals AS GRANTEE
LEFT OUTER JOIN sys.server_principals AS CONNEXION
ON GRANTEE.sid = CONNEXION.sid
LEFT OUTER JOIN sys.database_permissions AS PRIVILEGE
ON GRANTEE.principal_id = PRIVILEGE.grantee_principal_id
LEFT OUTER JOIN sys.database_principals AS GRANTOR
ON PRIVILEGE.grantor_principal_id = GRANTOR.principal_id
-- link to primary objects
LEFT OUTER JOIN sys.objects AS o
ON PRIVILEGE.major_id = o.object_id AND PRIVILEGE.class = 1
LEFT OUTER JOIN sys.schemas AS s
ON o.schema_id = s.schema_id
-- join to the schemas
LEFT OUTER JOIN sys.schemas AS ss
ON PRIVILEGE.major_id = ss.schema_id
AND minor_id = 0 AND PRIVILEGE.class = 3
-- lien avec les "principals" de la base de données
LEFT OUTER JOIN sys.database_principals AS dbp
ON PRIVILEGE.major_id = dbp.principal_id
AND minor_id = 0 AND PRIVILEGE.class = 4
-- lien avec les "assembly"
LEFT OUTER JOIN sys.assemblies AS asb
ON PRIVILEGE.major_id = asb.assembly_id
AND minor_id = 0 AND PRIVILEGE.class = 5
-- lien avec les "type" 6 =
LEFT OUTER JOIN sys.types AS typ
ON PRIVILEGE.major_id = typ.user_type_id
AND minor_id = 0 AND PRIVILEGE.class = 6
-- join to the XML schema collections
LEFT OUTER JOIN sys.xml_schema_collections AS xsc
ON PRIVILEGE.major_id = xsc.xml_collection_id
AND minor_id = 0 AND PRIVILEGE.class = 10
-- join to the message types
LEFT OUTER JOIN sys.service_message_types AS smt
ON PRIVILEGE.major_id = smt.message_type_id
AND minor_id = 0 AND PRIVILEGE.class = 15
-- join to the service contracts
LEFT OUTER JOIN sys.service_contracts AS sc
ON PRIVILEGE.major_id = sc.service_contract_id
AND minor_id = 0 AND PRIVILEGE.class = 16
-- join to the services
LEFT OUTER JOIN sys.services AS srv
ON PRIVILEGE.major_id = srv.service_id
AND minor_id = 0 AND PRIVILEGE.class = 17
-- join to the remote service bindings
LEFT OUTER JOIN sys.remote_service_bindings AS rsb
ON PRIVILEGE.major_id = rsb.remote_service_binding_id
AND minor_id = 0 AND PRIVILEGE.class = 18
-- join to the routes (class 19)
LEFT OUTER JOIN sys.routes AS r
ON PRIVILEGE.major_id = r.route_id
AND minor_id = 0 AND PRIVILEGE.class = 19
-- join to the full-text catalogs
LEFT OUTER JOIN sys.fulltext_catalogs AS ftc
ON PRIVILEGE.major_id = ftc.fulltext_catalog_id
AND minor_id = 0 AND PRIVILEGE.class = 23
-- join to the symmetric keys
LEFT OUTER JOIN sys.symmetric_keys AS sk
ON PRIVILEGE.major_id = sk.symmetric_key_id
AND minor_id = 0 AND PRIVILEGE.class = 24
-- join to the certificates
LEFT OUTER JOIN sys.certificates AS ctf
ON PRIVILEGE.major_id = ctf.certificate_id
AND minor_id = 0 AND PRIVILEGE.class = 25
-- join to the asymmetric keys
LEFT OUTER JOIN sys.asymmetric_keys AS ask
ON PRIVILEGE.major_id = ask.asymmetric_key_id
AND minor_id = 0 AND PRIVILEGE.class = 26
WHERE GRANTEE.type = 'S' --> SQL_USER
)
SELECT COALESCE (N'EXECUTE AS USER = '''+ GRANTOR + N'''; ' +
SQL_ORDER + N' ' + PRIVILEGE + N' ON ' +
COALESCE('[' + OBJECT_SCHEMA + N'].[' + OBJECT_NAME +'] ' COLLATE French_CI_AI +
COALESCE(N'(' + COLUMN_LIST + N')' COLLATE French_CI_AI, ''),
OBJECT_CLASS + N'::' + OBJECT_TYPE_OR_NAME COLLATE French_CI_AI) +
N' TO ' + GRANTEE +'; REVERT;' COLLATE French_CI_AI, '') AS SQL_COMMAND,
*
FROM T;

View File

@ -0,0 +1,134 @@
USE master;
GO
CREATE PROCEDURE dbo.sp__ADMIN_RECALC_STATS
AS
/******************************************************************************
* NATURE : PROCEDURE *
* OBJECT : master.dbo.sp__ADMIN_RECALC_STATS *
* CREATE : 2020-06-26 *
* VERSION : 1 *
* SYSTEM : YES *
*******************************************************************************
* Frédéric BROUARD - alias SQLpro - SARL SQL SPOT - SQLpro@sqlspot.com *
* Architecte de données : expertise, audit, conseil, formation, modélisation *
* tuning, sur les SGBD Relationnels, le langage SQL, MS SQL Server/PostGreSQL *
* blog: http://blog.developpez.com/sqlpro site: http://sqlpro.developpez.com *
*******************************************************************************
* PURPOSE : recompute the statistics of a database *
*******************************************************************************
* INPUTS : *
* none *
*******************************************************************************
* EXEMPLE : *
* USE maBase; *
* EXEC sp__ADMIN_RECALC_STATS *
******************************************************************************/
SET NOCOUNT ON;
DECLARE @SQL NVARCHAR(max) = N''
SELECT @SQL = @SQL + N'UPDATE STATISTICS [' + s.name + '].['
+ o.name + '] (['
+ st.name + ']) WITH FULLSCAN;'
FROM sys.stats AS st
INNER JOIN sys.objects AS o ON st.object_id = o.object_id
INNER JOIN sys.schemas AS s ON o.schema_id = s.schema_id
CROSS APPLY sys.dm_db_stats_properties(st.object_id, st.stats_id)
WHERE modification_counter > 0
  AND rows > 0 -- guard against division by zero on empty tables
  AND 100.0 * modification_counter / rows >
CASE
WHEN 10 + (13.8 - LOG(rows)) / 2 < 0.5
THEN 0.5
ELSE 10 + (13.8 - LOG(rows)) / 2
END
/* VARIANT 1 of the WHERE clause
WHERE modification_counter > 0
AND 100.0 * modification_counter / rows <
CASE WHEN rows + modification_counter < 100000
THEN 10
ELSE LOG10(modification_counter + rows)
/ (1 + (LOG10(modification_counter + rows) - LOG10(100000))) END
*/
/* VARIANT 2 of the WHERE clause
WHERE 1 = CASE WHEN COALESCE(CAST(modification_counter AS REAL) / rows, 1)
> 0.3
THEN 1
WHEN rows < 100000 AND
COALESCE(CAST(modification_counter AS REAL) / rows, 1)
> 0.1
THEN 1
WHEN rows > 300000000 AND
COALESCE(CAST(modification_counter AS REAL) / rows, 1)
> 0.0048
THEN 1
WHEN rows BETWEEN 100000 AND 300000000 AND
COALESCE(CAST(modification_counter AS REAL) / rows, 1)
> (20 - LOG(rows))
THEN 1
ELSE 0
END;
*/
EXEC (@SQL);
GO
EXEC sp_MS_marksystemobject 'dbo.sp__ADMIN_RECALC_STATS';
GO
CREATE PROCEDURE dbo.sp__ADMIN_DEFRAG_INDEX
AS
/******************************************************************************
* NATURE : PROCEDURE *
* OBJECT : master.dbo.sp__ADMIN_DEFRAG_INDEX *
* CREATE : 2020-06-26 *
* VERSION : 1 *
* SYSTEM : YES *
*******************************************************************************
* Frédéric BROUARD - alias SQLpro - SARL SQL SPOT - SQLpro@sqlspot.com *
* Architecte de données : expertise, audit, conseil, formation, modélisation *
* tuning, sur les SGBD Relationnels, le langage SQL, MS SQL Server/PostGreSQL *
* blog: http://blog.developpez.com/sqlpro site: http://sqlpro.developpez.com *
*******************************************************************************
* PURPOSE : defragment the indexes of a database *
*******************************************************************************
* INPUTS : *
* none *
*******************************************************************************
* EXEMPLE : *
* USE maBase; *
* EXEC sp__ADMIN_DEFRAG_INDEX *
*******************************************************************************
* IMPROVE : *
* version Enterprise *
* ALTER INDEX ... REBUILD WITH (ONLINE = ON) *
******************************************************************************/
SET NOCOUNT ON;
DECLARE @SQL NVARCHAR(max) = N''
SELECT @SQL = @SQL +
CASE WHEN i.name IS NULL
THEN N'ALTER TABLE [' + s.name + '].[' + o.name + '] REBUILD;'
WHEN avg_fragmentation_in_percent > 30
THEN N'ALTER INDEX [' + i.name + '] ON [' + s.name + '].[' + o.name + '] REBUILD;'
ELSE N'ALTER INDEX [' + i.name + '] ON [' + s.name + '].[' + o.name + '] REORGANIZE;'
END
FROM sys.dm_db_index_physical_stats(DB_NAME(), NULL, NULL, NULL, NULL) AS ips
INNER JOIN sys.objects AS o ON ips.object_id = o.object_id
INNER JOIN sys.schemas AS s ON o.schema_id = s.schema_id
INNER JOIN sys.indexes AS i ON ips.object_id = i.object_id AND ips.index_id = i.index_id
WHERE page_count > 64
AND avg_fragmentation_in_percent > 10
AND ips.index_id < 1000;
EXEC (@SQL);
GO
EXEC sp_MS_marksystemobject 'sp__ADMIN_DEFRAG_INDEX';
GO
--> put this in a SQL AGENT job, once a day during off-peak hours
DECLARE @SQL NVARCHAR(max) = N'';
SELECT @SQL = @SQL + 'USE [' + name + '];EXEC sp__ADMIN_DEFRAG_INDEX;EXEC sp__ADMIN_RECALC_STATS;'
FROM sys.databases
WHERE name NOT IN ('model', 'tempdb')
AND state = 0
AND source_database_id IS NULL;
EXEC (@SQL);

View File

@ -0,0 +1,48 @@
SELECT s.name AS TABLE_SCHEMA,
o.name AS TABLE_NAME,
i.name AS INDEX_NAME,
f.name AS PARTITION_FUNCTION,
ps.name AS PARTITION_SCHEMA,
p.partition_number AS PART_NUM,
fg.name AS FILE_GROUP,
rows AS ROW_COUNT,
SUM(dbf.size) OVER(PARTITION BY fg.name) AS PAGE_COUNT,
au.total_pages AS USED_PAGES,
CASE boundary_value_on_right
WHEN 1
THEN 'RIGHT'
ELSE 'LEFT'
END AS RANGE,
rv1.value AS LOW_VALUE,
rv2.value AS HIGH_VALUE
FROM sys.partitions p
JOIN sys.indexes i
ON p.object_id = i.object_id
AND p.index_id = i.index_id
JOIN sys.objects AS o
ON i.object_id = o.object_id
JOIN sys.schemas AS s
ON o.schema_id = s.schema_id
JOIN sys.partition_schemes ps
ON ps.data_space_id = i.data_space_id
JOIN sys.partition_functions f
ON f.function_id = ps.function_id
JOIN sys.destination_data_spaces dds
ON dds.partition_scheme_id = ps.data_space_id
AND dds.destination_id = p.partition_number
JOIN sys.filegroups fg
ON dds.data_space_id = fg.data_space_id
JOIN sys.database_files AS dbf
ON dbf.data_space_id = fg.data_space_id
JOIN sys.allocation_units au
ON au.container_id = p.partition_id
LEFT OUTER JOIN sys.partition_range_values rv2
ON f.function_id = rv2.function_id
AND p.partition_number = rv2.boundary_id
LEFT OUTER JOIN sys.partition_range_values rv1
ON f.function_id = rv1.function_id
AND p.partition_number - 1 = rv1.boundary_id
ORDER BY TABLE_SCHEMA,
TABLE_NAME,
INDEX_NAME,
LOW_VALUE;

View File

@ -0,0 +1,30 @@
-- metadata of the databases and of the files of all databases
SELECT *
FROM sys.databases;
SELECT SUM(size) * 8 / SQUARE(1024) AS SIZE_GB,
type_desc,
SUM(SUM(size) * 8 / SQUARE(1024)) OVER() AS TOTAL_SIZE_GB
FROM sys.master_files
GROUP BY type_desc
ORDER BY type_desc;
SELECT SUM(size) * 8 / SQUARE(1024) AS SIZE_GB,
db.name,
SUM(SUM(size) * 8 / SQUARE(1024)) OVER() AS TOTAL_SIZE_GB
FROM sys.master_files AS mf
JOIN sys.databases AS db
ON mf.database_id = db.database_id
GROUP BY db.name
ORDER BY SIZE_GB DESC;
SELECT SUM(size) * 8 / SQUARE(1024) AS SIZE_GB,
db.name, type_desc,
SUM(SUM(size) * 8 / SQUARE(1024)) OVER() AS TOTAL_SIZE_GB
FROM sys.master_files AS mf
JOIN sys.databases AS db
ON mf.database_id = db.database_id
GROUP BY db.name, type_desc
ORDER BY type_desc, SIZE_GB DESC;

View File

@ -0,0 +1,18 @@
SELECT *
FROM sys.server_principals;
SELECT *
FROM sys.server_role_members;
SELECT usr.name AS USER_NAME, 'MEMBER OF ROLE', rol.name AS ROLE_NAME
FROM sys.server_role_members AS srm
JOIN sys.server_principals AS usr
ON srm.member_principal_id = usr.principal_id
JOIN sys.server_principals AS rol
ON srm.role_principal_id = rol.principal_id;
SELECT p.*, ep.name
FROM sys.server_permissions AS p
LEFT OUTER JOIN sys.endpoints AS ep
ON p.major_id = ep.endpoint_id AND p.class_desc = 'ENDPOINT'

View File

@ -0,0 +1,74 @@
USE master;
GO
CREATE PROCEDURE dbo.sp__METRIQUE_STOCKAGE @REAJUSTE BIT = 0
AS
SET NOCOUNT ON;
IF @REAJUSTE = 1
--> refresh the storage space usage statistics
DBCC UPDATEUSAGE (0);
--> transaction log volume
DECLARE @T TABLE (database_name sysname, log_size_mb FLOAT, log_space_used_percent FLOAT, STATUS bit);
DECLARE @TRANSACTIONS_RESERVEES_MO BIGINT,
@TRANSACTIONS_UTILISEES_MO BIGINT,
@TRANSACTIONS_UTILISEES_POURCENT DECIMAL(5,2);
INSERT INTO @T
EXEC ('DBCC SQLPERF(LOGSPACE)')
SELECT @TRANSACTIONS_RESERVEES_MO = ROUND(log_size_mb, 0),
@TRANSACTIONS_UTILISEES_MO = ROUND(log_size_mb * log_space_used_percent / 100.0, 0),
@TRANSACTIONS_UTILISEES_POURCENT = CAST(log_space_used_percent AS DECIMAL(5,2))
FROM @T WHERE database_name = DB_NAME();
-- size of the storage envelope:
WITH
T_FILES AS (
SELECT CAST(ROUND(SUM(CASE WHEN "type" = 1
THEN SIZE
ELSE 0
END) / 128.0, 0) AS BIGINT) AS TRANSACTIONS_RESERVEES_MO,
CAST(ROUND(SUM(CASE WHEN "type" != 1
THEN SIZE
ELSE 0
END) / 128.0, 0) AS BIGINT) AS DONNEES_RESERVE_MO
FROM sys.database_files),
T_DB AS (
SELECT TRANSACTIONS_RESERVEES_MO + DONNEES_RESERVE_MO AS BASE_TAILLE_MO,
DONNEES_RESERVE_MO, TRANSACTIONS_RESERVEES_MO
FROM T_FILES),
T_PAGES AS (
-- size of the data and index pages
SELECT CAST(ROUND(SUM(au.used_pages) / 128.0, 0) AS BIGINT) AS DONNEES_UTILISEES_MO,
CAST(ROUND(SUM(CASE
WHEN it.internal_type IN (202, 204, 211, 212, 213, 214, 215, 216)
THEN 0
WHEN au.TYPE != 1
THEN au.used_pages
WHEN p.index_id < 2
THEN au.data_pages
ELSE 0
END) / 128.0, 0) AS BIGINT) AS TABLES_MO
FROM sys.partitions AS p
INNER JOIN sys.allocation_units au
ON p.partition_id = au.container_id
LEFT OUTER JOIN sys.internal_tables AS it
ON p.object_id = it.object_id)
SELECT BASE_TAILLE_MO,
DONNEES_RESERVE_MO,
DONNEES_UTILISEES_MO,
CAST(100.0 * CAST( DONNEES_UTILISEES_MO AS FLOAT)
/ DONNEES_RESERVE_MO AS DECIMAL(5,2)) AS DONNEES_UTILISEES_POURCENT,
TABLES_MO,
DONNEES_UTILISEES_MO - TABLES_MO AS INDEX_MO,
CAST(100.0 * CAST( TABLES_MO AS FLOAT)
/ DONNEES_UTILISEES_MO AS DECIMAL(5,2)) AS TABLES_POURCENT ,
CAST(100.0 * CAST( DONNEES_UTILISEES_MO - TABLES_MO AS FLOAT)
/ DONNEES_UTILISEES_MO AS DECIMAL(5,2)) AS INDEX_POURCENT,
TRANSACTIONS_RESERVEES_MO,
@TRANSACTIONS_UTILISEES_MO AS TRANSACTIONS_UTILISEES_MO,
@TRANSACTIONS_UTILISEES_POURCENT AS TRANSACTIONS_UTILISEES_POURCENT
FROM T_PAGES CROSS JOIN T_DB;
GO
EXEC sp_MS_marksystemobject 'sp__METRIQUE_STOCKAGE'
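-- Usage sketch: run it in the context of the database to measure;
-- @REAJUSTE = 1 first refreshes the usage counters via DBCC UPDATEUSAGE
USE DB_GRAND_HOTEL;
GO
EXEC sp__METRIQUE_STOCKAGE @REAJUSTE = 1;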

View File

@ -0,0 +1,73 @@
USE msdb;
GO
CREATE PROCEDURE dbo.P_BACKUP
@MODE CHAR(1), -- L : LOG, C : COMPLETE, D : DIFFERENTIAL
@PATH NVARCHAR(256),
@DATETIME BIT = 1,
@RETAINSDAYS TINYINT
AS
/******************************************************************************
* NATURE : PROCEDURE *
* OBJECT : msdb.dbo.P_BACKUP *
* CREATE : 2020-06-26 *
* VERSION : 1 *
* SYSTEM : NO *
*******************************************************************************
* Frédéric BROUARD - alias SQLpro - SARL SQL SPOT - SQLpro@sqlspot.com *
* Architecte de données : expertise, audit, conseil, formation, modélisation *
* tuning, sur les SGBD Relationnels, le langage SQL, MS SQL Server/PostGreSQL *
* blog: http://blog.developpez.com/sqlpro site: http://sqlpro.developpez.com *
*******************************************************************************
* PURPOSE : back up every database of an instance *
*******************************************************************************
* INPUTS : *
* @MODE CHAR(1), L : LOG, C : COMPLETE, D : DIFFERENTIAL *
* @PATH NVARCHAR(256), path to the backup directory *
* @DATETIME BIT = 1, whether or not to add the date and time to the backup *
* file name *
* @RETAINSDAYS TINYINT if there is no date/time in the backup file name, *
* then backups accumulate and only the last *
* n days are retained *
*******************************************************************************
* EXEMPLE : *
* EXEC dbo.P_BACKUP 'C', 'C:\DATABASE\BACKUP', 1, NULL; *
******************************************************************************/
SET NOCOUNT ON;
SET @MODE = UPPER(@MODE);
IF RIGHT(@PATH, 1) <> '\'
SET @PATH = @PATH + '\';
DECLARE @SQL NVARCHAR(max) = N'';
SELECT @SQL = @SQL + N'BACKUP ' +
CASE @MODE
WHEN N'L' THEN N'LOG '
ELSE N'DATABASE '
END + N'[' + name + '] TO DISK = ''' + @PATH + name +
CASE @DATETIME
WHEN 1 THEN N'_' + REPLACE(REPLACE(REPLACE(CONVERT(NVARCHAR(32), SYSDATETIME(), 121),
N'-', N''), N':', N''), N' ', N'_') -- colons are not allowed in Windows file names
ELSE ''
END +
CASE @MODE
WHEN N'L' THEN N'.TRN'
ELSE N'.BAK'
END + '''' +
CASE WHEN @RETAINSDAYS IS NOT NULL AND @DATETIME = 0
THEN N' WITH RETAINDAYS = ' + CAST(@RETAINSDAYS AS VARCHAR(16))
ELSE N''
END + N';'
FROM sys.databases
WHERE state = 0
AND source_database_id IS NULL
AND name NOT in ('model', 'tempdb')
AND 1 = CASE WHEN @MODE = 'L' AND recovery_model = 3 THEN 0 -- 3 = SIMPLE: no log backup possible
ELSE 1 END;
EXEC (@SQL);
GO
--> schedule the FULL backup every day
EXEC dbo.P_BACKUP 'C', 'C:\DATABASE\BACKUP', 1, NULL;
--> schedule the TRAN (log) backup every 30 minutes
EXEC dbo.P_BACKUP 'L', 'C:\DATABASE\BACKUP', 1, NULL;

View File

@ -0,0 +1,8 @@
EXEC sp_configure 'cost threshold for parallelism', 5 --> 12 (the default of 5 has not changed since 1999!)
EXEC sp_configure 'max degree of parallelism', 0 --> 0
EXEC sp_configure 'max server memory (MB)', 64000 --> 0
EXEC sp_configure 'optimize for ad hoc workloads', 1
EXEC sp_configure 'backup compression default', 1
EXEC sp_configure 'backup checksum default', 1
RECONFIGURE
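-- A quick check of the resulting values (value_in_use is the running value):
SELECT name, value, value_in_use
FROM sys.configurations
WHERE name IN ('cost threshold for parallelism', 'max degree of parallelism',
               'max server memory (MB)', 'optimize for ad hoc workloads',
               'backup compression default', 'backup checksum default');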

View File

@ -0,0 +1,7 @@
SELECT db.name AS DATABASE_NAME, mf.name AS FILE_NAME,
N'ALTER DATABASE [' + db.name + '] MODIFY FILE ( NAME = [' + mf.name + '], FILEGROWTH = 64 MB);' AS SQL_COMMAND
FROM sys.master_files AS mf
JOIN sys.databases AS db
ON mf.database_id = db.database_id
WHERE is_percent_growth = 1
AND db.database_id > 4;
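-- Sketch: the query above only lists the commands; to apply them in one pass,
-- concatenate and execute them (review the list first):
DECLARE @SQL NVARCHAR(max) = N'';
SELECT @SQL = @SQL + N'ALTER DATABASE [' + db.name + '] MODIFY FILE ( NAME = ['
              + mf.name + '], FILEGROWTH = 64 MB);'
FROM sys.master_files AS mf
JOIN sys.databases AS db
     ON mf.database_id = db.database_id
WHERE is_percent_growth = 1
  AND db.database_id > 4;
EXEC (@SQL);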

View File

@ -0,0 +1,30 @@
-- fix the problem of "crossed" collations between the production database and tempdb
-- by putting the database in partial containment mode (containment = partial)
-- first check at server level that this feature is enabled...
--> (contained database authentication = 1)
EXEC sp_configure 'show advanced options', 1;
RECONFIGURE;
GO
EXEC sp_configure 'contained database authentication', 1;
RECONFIGURE;
GO
-- THE DATABASE MUST NOW BE PLACED IN PARTIAL CONTAINMENT MODE
-- switch to the context of the DB_TEST database
USE DB_TEST;
GO
-- become the sole user of the database
ALTER DATABASE DB_TEST SET SINGLE_USER WITH ROLLBACK IMMEDIATE;
GO
-- move to the master database; DB_TEST no longer has any user
USE [master];
GO
ALTER DATABASE [DB_TEST] SET ... -- command to run while no one uses the database
GO
-- switch the database back to multi-user mode
ALTER DATABASE [DB_TEST] SET MULTI_USER
GO
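-- For this script, the command to pass while the database is single-user would
-- presumably be the switch to partial containment:
-- ALTER DATABASE [DB_TEST] SET CONTAINMENT = PARTIAL;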

View File

@ -0,0 +1,93 @@
USE DB_GRAND_HOTEL;
GO
-- create the storage spaces
ALTER DATABASE DB_GRAND_HOTEL ADD FILEGROUP FG_PART_OLD;
ALTER DATABASE DB_GRAND_HOTEL
ADD FILE (NAME = 'F_PART_OLD',
FILENAME = 'H:\DATABASE_SQL\SQL2019FBIN2\DATA\HOTEL_PART_OLD.ndf',
SIZE = 25 MB,
FILEGROWTH = 10 MB)
TO FILEGROUP FG_PART_OLD;
GO
ALTER DATABASE DB_GRAND_HOTEL ADD FILEGROUP FG_PART_2006;
ALTER DATABASE DB_GRAND_HOTEL
ADD FILE (NAME = 'F_PART_2006',
FILENAME = 'H:\DATABASE_SQL\SQL2019FBIN2\DATA\HOTEL_PART_2006.ndf',
SIZE = 25 MB,
FILEGROWTH = 10 MB)
TO FILEGROUP FG_PART_2006;
GO
ALTER DATABASE DB_GRAND_HOTEL ADD FILEGROUP FG_PART_2007;
ALTER DATABASE DB_GRAND_HOTEL
ADD FILE (NAME = 'F_PART_2007',
FILENAME = 'H:\DATABASE_SQL\SQL2019FBIN2\DATA\HOTEL_PART_2007.ndf',
SIZE = 25 MB,
FILEGROWTH = 10 MB)
TO FILEGROUP FG_PART_2007;
GO
ALTER DATABASE DB_GRAND_HOTEL ADD FILEGROUP FG_PART_2008;
ALTER DATABASE DB_GRAND_HOTEL
ADD FILE (NAME = 'F_PART_2008',
FILENAME = 'H:\DATABASE_SQL\SQL2019FBIN2\DATA\HOTEL_PART_2008.ndf',
SIZE = 25 MB,
FILEGROWTH = 10 MB)
TO FILEGROUP FG_PART_2008;
GO
-- 1) create the partitioning function
CREATE PARTITION FUNCTION PF_DATE_FACTURE (DATETIME)
AS RANGE RIGHT
FOR VALUES ('2006-01-01', '2007-01-01', '2008-01-01');
-- 2) create the partition scheme
CREATE PARTITION SCHEME PS_DATE_FACTURE
AS PARTITION PF_DATE_FACTURE
TO (FG_PART_OLD, FG_PART_2006, FG_PART_2007, FG_PART_2008);
GO
-- 3) create the object on the partition scheme
BEGIN TRANSACTION;
BEGIN TRY
--> start by removing the FK constraint from table T_FACTURE_ITEM_ITM
ALTER TABLE [S_CHB].[T_FACTURE_ITEM_ITM] DROP CONSTRAINT [FK_T_FACTUR_CONTIENT_T_FACTUR];
--> the PK must be removed as well!!!
ALTER TABLE [S_CHB].[T_FACTURE_FAC] DROP CONSTRAINT [PK_T_FACTURE_FAC];
--> impossible while the clustered PK exists!
CREATE UNIQUE CLUSTERED INDEX X ON [S_CHB].[T_FACTURE_FAC] ([FAC_DATE], [FAC_ID])
ON PS_DATE_FACTURE(FAC_DATE);
--> put the PK back (WARNING: by default a PK is created as a clustered index)
ALTER TABLE [S_CHB].[T_FACTURE_FAC]
ADD CONSTRAINT [PK_T_FACTURE_FAC] PRIMARY KEY NONCLUSTERED ([FAC_ID])
ON FG_DATA;
--> put the FK back
ALTER TABLE [S_CHB].[T_FACTURE_ITEM_ITM]
ADD CONSTRAINT [FK_T_FACTUR_CONTIENT_T_FACTUR]
FOREIGN KEY ([FAC_ID])
REFERENCES [S_CHB].[T_FACTURE_FAC] (FAC_ID);
COMMIT;
-- everything went well => COMMIT
END TRY
BEGIN CATCH
-- something went wrong => ROLLBACK
IF XACT_STATE() <> 0
ROLLBACK;
THROW;
END CATCH
-- view the row statistics of the partitions
SELECT *
FROM sys.dm_db_partition_stats AS ps
JOIN sys.indexes AS i
ON ps.object_id = i.object_id AND ps.index_id = i.index_id
WHERE ps.object_id = OBJECT_ID('[S_CHB].[T_FACTURE_FAC]')
AND i.type_desc = 'CLUSTERED';

View File

@ -0,0 +1,146 @@
ALTER DATABASE [DB_GRAND_HOTEL] ADD FILEGROUP FG_DATA;
GO
ALTER DATABASE [DB_GRAND_HOTEL]
ADD FILE (NAME = 'F_DATA_1',
FILENAME = 'H:\DATABASE_SQL\SQL2019FBIN2\DATA\F_DATA_1.ndf',
SIZE = 100 MB,
FILEGROWTH = 64 MB)
TO FILEGROUP FG_DATA;
GO
ALTER DATABASE [DB_GRAND_HOTEL]
ADD FILE (NAME = 'F_DATA_2',
FILENAME = 'H:\DATABASE_SQL\SQL2019FBIN2\DATA\F_DATA_2.ndf',
SIZE = 100 MB,
FILEGROWTH = 64 MB)
TO FILEGROUP FG_DATA;
GO
ALTER DATABASE [DB_GRAND_HOTEL]
MODIFY FILEGROUP FG_DATA DEFAULT;
GO
--> move the tables and indexes
USE [DB_GRAND_HOTEL]
GO
CREATE UNIQUE CLUSTERED INDEX [PK_T_ADRESSE_ADR] ON [S_PRS].[T_ADRESSE_ADR] ([ADR_ID] ASC) WITH (DROP_EXISTING = ON) ON FG_DATA;
CREATE INDEX [TJ_ADRPAY_FK] ON [S_PRS].[T_ADRESSE_ADR] ([PAY_ID] ASC) WITH (DROP_EXISTING = ON) ON FG_DATA;
CREATE INDEX [SITUE__ADR__FK] ON [S_PRS].[T_ADRESSE_ADR] ([STT_ID] ASC) WITH (DROP_EXISTING = ON) ON FG_DATA;
CREATE INDEX [ADR_DEFAUT2_FK] ON [S_PRS].[T_ADRESSE_ADR] ([PRS_ID] ASC) WITH (DROP_EXISTING = ON) ON FG_DATA;
CREATE UNIQUE CLUSTERED INDEX [PK_T_CHAMBRE_CHB] ON [S_CHB].[T_CHAMBRE_CHB] ([CHB_ID] ASC) WITH (DROP_EXISTING = ON) ON FG_DATA;
CREATE UNIQUE CLUSTERED INDEX [PK_T_CLIENT_CLI] ON [S_CEE].[T_CLIENT_CLI] ([PRS_ID] ASC) WITH (DROP_EXISTING = ON) ON FG_DATA;
CREATE UNIQUE CLUSTERED INDEX [PK_T_DNS] ON [S_PRS].[T_DNS] ([DNS_ID] ASC) WITH (DROP_EXISTING = ON) ON FG_DATA;
CREATE UNIQUE INDEX [UK_DNS_NAME] ON [S_PRS].[T_DNS] ([DNS_NAME] ASC) WITH (DROP_EXISTING = ON) ON FG_DATA;
CREATE UNIQUE CLUSTERED INDEX [PK_T_EMAIL_EML] ON [S_PRS].[T_EMAIL_EML] ([EML_ID] ASC) WITH (DROP_EXISTING = ON) ON FG_DATA;
CREATE INDEX [TJ_EMLSTT_FK] ON [S_PRS].[T_EMAIL_EML] ([STT_ID] ASC) WITH (DROP_EXISTING = ON) ON FG_DATA;
CREATE INDEX [LOCALISE_FK] ON [S_PRS].[T_EMAIL_EML] ([DNS_ID] ASC) WITH (DROP_EXISTING = ON) ON FG_DATA;
CREATE INDEX [MAIL_DEFAUT_FK] ON [S_PRS].[T_EMAIL_EML] ([PRS_ID] ASC) WITH (DROP_EXISTING = ON) ON FG_DATA;
CREATE UNIQUE CLUSTERED INDEX [PK_T_EMPLOYE_EMP] ON [S_CEE].[T_EMPLOYE_EMP] ([PRS_ID] ASC) WITH (DROP_EXISTING = ON) ON FG_DATA;
CREATE UNIQUE INDEX [AK_UK_EMP_MAT_T_EMPLOY] ON [S_CEE].[T_EMPLOYE_EMP] ([EMP_NATRICULE] ASC) WITH (DROP_EXISTING = ON) ON FG_DATA;
CREATE INDEX [ASSOCIATION_21_FK] ON [S_CEE].[T_EMPLOYE_EMP] ([SVC_ID] ASC) WITH (DROP_EXISTING = ON) ON FG_DATA;
CREATE INDEX [ASSOCIATION_22_FK] ON [S_CEE].[T_EMPLOYE_EMP] ([T_E_PRS_ID] ASC) WITH (DROP_EXISTING = ON) ON FG_DATA;
CREATE INDEX [INVESTI_FK] ON [S_CEE].[T_EMPLOYE_EMP] ([FCT_ID] ASC) WITH (DROP_EXISTING = ON) ON FG_DATA;
CREATE UNIQUE CLUSTERED INDEX [PK_T_FACTURE_FAC] ON [S_CHB].[T_FACTURE_FAC] ([FAC_ID] ASC) WITH (DROP_EXISTING = ON) ON FG_DATA;
CREATE INDEX [PAYE_FK] ON [S_CHB].[T_FACTURE_FAC] ([PRS_ID] ASC) WITH (DROP_EXISTING = ON) ON FG_DATA;
CREATE INDEX [PAYEE_FK] ON [S_CHB].[T_FACTURE_FAC] ([PMT_ID] ASC) WITH (DROP_EXISTING = ON) ON FG_DATA;
CREATE UNIQUE CLUSTERED INDEX [PK_T_FACTURE_ITEM_ITM] ON [S_CHB].[T_FACTURE_ITEM_ITM] ([ITM_ID] ASC) WITH (DROP_EXISTING = ON) ON FG_DATA;
CREATE INDEX [CONTIENT_FK] ON [S_CHB].[T_FACTURE_ITEM_ITM] ([FAC_ID] ASC) WITH (DROP_EXISTING = ON) ON FG_DATA;
CREATE INDEX [TAXE_FK] ON [S_CHB].[T_FACTURE_ITEM_ITM] ([TVA_ID] ASC) WITH (DROP_EXISTING = ON) ON FG_DATA;
CREATE UNIQUE CLUSTERED INDEX [PK_T_J_APPLIQUE_APQ] ON [S_CLD].[T_J_APPLIQUE_APQ] ([PFM_ID] ASC, [PJR_DATE] ASC) WITH (DROP_EXISTING = ON) ON FG_DATA;
CREATE INDEX [L_PJRPFM_FK] ON [S_CLD].[T_J_APPLIQUE_APQ] ([PJR_DATE] ASC) WITH (DROP_EXISTING = ON) ON FG_DATA;
CREATE UNIQUE CLUSTERED INDEX [PK_T_J_TRAVAILLE_TRV] ON [S_CEE].[T_J_TRAVAILLE_TRV] ([PRS_ID] ASC, [PJR_DATE] ASC) WITH (DROP_EXISTING = ON) ON FG_DATA;
CREATE INDEX [T_J_TRAVAILLE_TRV2_FK] ON [S_CEE].[T_J_TRAVAILLE_TRV] ([PJR_DATE] ASC) WITH (DROP_EXISTING = ON) ON FG_DATA;
CREATE UNIQUE CLUSTERED INDEX [PK_T_LIGNE_ADRESSE_LAD] ON [S_PRS].[T_LIGNE_ADRESSE_LAD] ([LAD_ID] ASC) WITH (DROP_EXISTING = ON) ON FG_DATA;
CREATE INDEX [COMPOSEE_FK] ON [S_PRS].[T_LIGNE_ADRESSE_LAD] ([ADR_ID] ASC) WITH (DROP_EXISTING = ON) ON FG_DATA;
CREATE UNIQUE CLUSTERED INDEX [PK_T_MANDATAIRE_MDT] ON [S_CEE].[T_MANDATAIRE_MDT] ([MDT_ID] ASC) WITH (DROP_EXISTING = ON) ON FG_DATA;
CREATE INDEX [EST_MANDATEE_FK] ON [S_CEE].[T_MANDATAIRE_MDT] ([T_P_PRS_ID] ASC) WITH (DROP_EXISTING = ON) ON FG_DATA;
CREATE INDEX [REPRESENTEE_FK] ON [S_CEE].[T_MANDATAIRE_MDT] ([PRS_ID_MORALE] ASC) WITH (DROP_EXISTING = ON) ON FG_DATA;
CREATE INDEX [ASSOCIATION_20_FK] ON [S_CEE].[T_MANDATAIRE_MDT] ([MDA_ID] ASC) WITH (DROP_EXISTING = ON) ON FG_DATA;
CREATE UNIQUE CLUSTERED INDEX [PK_T_OCCUPATION_OCP] ON [S_CHB].[T_OCCUPATION_OCP] ([CHB_ID] ASC, [PJR_DATE] ASC) WITH (DROP_EXISTING = ON) ON FG_DATA;
CREATE INDEX [OCCUPEE_FK] ON [S_CHB].[T_OCCUPATION_OCP] ([PJR_DATE] ASC) WITH (DROP_EXISTING = ON) ON FG_DATA;
CREATE INDEX [PAR_FK] ON [S_CHB].[T_OCCUPATION_OCP] ([PRS_ID] ASC) WITH (DROP_EXISTING = ON) ON FG_DATA;
CREATE UNIQUE CLUSTERED INDEX [PK_T_PERSONNE_MORALE_PSM] ON [S_PRS].[T_PERSONNE_MORALE_PSM] ([PRS_ID] ASC) WITH (DROP_EXISTING = ON) ON FG_DATA;
CREATE INDEX [DOTEE_FK] ON [S_PRS].[T_PERSONNE_MORALE_PSM] ([FMO_ID] ASC) WITH (DROP_EXISTING = ON) ON FG_DATA;
CREATE UNIQUE CLUSTERED INDEX [PK_T_PERSONNE_PHYSIQUE_PSP] ON [S_PRS].[T_PERSONNE_PHYSIQUE_PSP] ([PRS_ID] ASC) WITH (DROP_EXISTING = ON) ON FG_DATA;
CREATE INDEX [TJ_ATRTIT_FK] ON [S_PRS].[T_PERSONNE_PHYSIQUE_PSP] ([TIT_ID] ASC) WITH (DROP_EXISTING = ON) ON FG_DATA;
CREATE INDEX [TJ_ATRSEX_FK] ON [S_PRS].[T_PERSONNE_PHYSIQUE_PSP] ([SEX_ID] ASC) WITH (DROP_EXISTING = ON) ON FG_DATA;
CREATE UNIQUE CLUSTERED INDEX [PK_T_PERSONNE_PRS] ON [S_PRS].[T_PERSONNE_PRS] ([PRS_ID] ASC) WITH (DROP_EXISTING = ON) ON FG_DATA;
CREATE INDEX [ADR_DEFAUT_FK] ON [S_PRS].[T_PERSONNE_PRS] ([ADR_ID] ASC) WITH (DROP_EXISTING = ON) ON FG_DATA;
CREATE INDEX [MAIL_DEFAUT2_FK] ON [S_PRS].[T_PERSONNE_PRS] ([EML_ID] ASC) WITH (DROP_EXISTING = ON) ON FG_DATA;
CREATE INDEX [TEL_DEFAUT2_FK] ON [S_PRS].[T_PERSONNE_PRS] ([TEL_ID] ASC) WITH (DROP_EXISTING = ON) ON FG_DATA;
CREATE UNIQUE CLUSTERED INDEX [PK_T_PLN_JOUR_PJR] ON [S_CLD].[T_PLN_JOUR_PJR] ([PJR_DATE] ASC) WITH (DROP_EXISTING = ON) ON FG_DATA;
CREATE INDEX [L_PJRPJM_FK1] ON [S_CLD].[T_PLN_JOUR_PJR] ([PJM_ID] ASC) WITH (DROP_EXISTING = ON) ON FG_DATA;
CREATE INDEX [L_PJRPMS_FK1] ON [S_CLD].[T_PLN_JOUR_PJR] ([PMS_ID] ASC) WITH (DROP_EXISTING = ON) ON FG_DATA;
CREATE INDEX [L_PJRPAN_FK1] ON [S_CLD].[T_PLN_JOUR_PJR] ([PAN_ID] ASC) WITH (DROP_EXISTING = ON) ON FG_DATA;
CREATE INDEX [TJ_PJRPSM_DEBUTE_FK1] ON [S_CLD].[T_PLN_JOUR_PJR] ([PSM_ID] ASC) WITH (DROP_EXISTING = ON) ON FG_DATA;
CREATE INDEX [L_PJRPJS_FK1] ON [S_CLD].[T_PLN_JOUR_PJR] ([PJS_ID] ASC) WITH (DROP_EXISTING = ON) ON FG_DATA;
CREATE INDEX [TJ_PJRPJA_FK1] ON [S_CLD].[T_PLN_JOUR_PJR] ([PJA_ID] ASC) WITH (DROP_EXISTING = ON) ON FG_DATA;
CREATE INDEX [L_PJRPTR_FK1] ON [S_CLD].[T_PLN_JOUR_PJR] ([PTR_ID] ASC) WITH (DROP_EXISTING = ON) ON FG_DATA;
CREATE INDEX [L_PJRPST_FK1] ON [S_CLD].[T_PLN_JOUR_PJR] ([PST_ID] ASC) WITH (DROP_EXISTING = ON) ON FG_DATA;
CREATE UNIQUE CLUSTERED INDEX [PK_T_RESERVATION_RSV] ON [S_CHB].[T_RESERVATION_RSV] ([RSV_ID] ASC) WITH (DROP_EXISTING = ON) ON FG_DATA;
CREATE INDEX [DEMANDE_FK] ON [S_CHB].[T_RESERVATION_RSV] ([PRS_ID] ASC) WITH (DROP_EXISTING = ON) ON FG_DATA;
CREATE INDEX [SEJOUR_DEBUTE_FK] ON [S_CHB].[T_RESERVATION_RSV] ([PJR_DATE_DEBUTE] ASC) WITH (DROP_EXISTING = ON) ON FG_DATA;
CREATE INDEX [SEJOUR_TERMINE_FK] ON [S_CHB].[T_RESERVATION_RSV] ([PJR_DATE_FINIE] ASC) WITH (DROP_EXISTING = ON) ON FG_DATA;
CREATE UNIQUE CLUSTERED INDEX [PK_T_R_FONCTION_FCT] ON [S_CEE].[T_R_FONCTION_FCT] ([FCT_ID] ASC) WITH (DROP_EXISTING = ON) ON FG_DATA;
CREATE UNIQUE INDEX [UK_FCT_CODE] ON [S_CEE].[T_R_FONCTION_FCT] ([FCT_CODE] ASC) WITH (DROP_EXISTING = ON) ON FG_DATA;
CREATE UNIQUE CLUSTERED INDEX [PK_T_R_FORME_ORGANISATION_FMO] ON [S_PRS].[T_R_FORME_ORGANISATION_FMO] ([FMO_ID] ASC) WITH (DROP_EXISTING = ON) ON FG_DATA;
CREATE UNIQUE INDEX [UK_FMO_CODE] ON [S_PRS].[T_R_FORME_ORGANISATION_FMO] ([FMO_CODE] ASC) WITH (DROP_EXISTING = ON) ON FG_DATA;
CREATE UNIQUE INDEX [UQ__DUAL__395884C4] ON [dbo].[DUAL] ([C] ASC) WITH (DROP_EXISTING = ON) ON FG_DATA;
CREATE UNIQUE CLUSTERED INDEX [PK_T_R_MANDAT_MDA] ON [S_CEE].[T_R_MANDAT_MDA] ([MDA_ID] ASC) WITH (DROP_EXISTING = ON) ON FG_DATA;
CREATE UNIQUE INDEX [UK_MDA_CODE] ON [S_CEE].[T_R_MANDAT_MDA] ([MDA_CODE] ASC) WITH (DROP_EXISTING = ON) ON FG_DATA;
CREATE UNIQUE CLUSTERED INDEX [PK_T_R_MODE_PAIEMENT_PMT] ON [S_CHB].[T_R_MODE_PAIEMENT_PMT] ([PMT_ID] ASC) WITH (DROP_EXISTING = ON) ON FG_DATA;
CREATE UNIQUE INDEX [UK_PMT_CODE] ON [S_CHB].[T_R_MODE_PAIEMENT_PMT] ([PMT_CODE] ASC) WITH (DROP_EXISTING = ON) ON FG_DATA;
CREATE UNIQUE CLUSTERED INDEX [PK_T_R_PAYS_PAY] ON [S_PRS].[T_R_PAYS_PAY] ([PAY_ID] ASC) WITH (DROP_EXISTING = ON) ON FG_DATA;
CREATE UNIQUE INDEX [UK_PAY_CODE] ON [S_PRS].[T_R_PAYS_PAY] ([PAY_CODE] ASC) WITH (DROP_EXISTING = ON) ON FG_DATA;
CREATE UNIQUE CLUSTERED INDEX [PK_T_R_PLN_ANNEE_PAN] ON [S_CLD].[T_R_PLN_ANNEE_PAN] ([PAN_ID] ASC) WITH (DROP_EXISTING = ON) ON FG_DATA;
CREATE UNIQUE CLUSTERED INDEX [PK_T_R_PLN_JOUR_ANNEE_PJA] ON [S_CLD].[T_R_PLN_JOUR_ANNEE_PJA] ([PJA_ID] ASC) WITH (DROP_EXISTING = ON) ON FG_DATA;
CREATE UNIQUE CLUSTERED INDEX [PK_T_R_PLN_JOUR_FERIE_FIXE_PJF] ON [S_CLD].[T_R_PLN_JOUR_FERIE_FIXE_PJF] ([PJM_ID] ASC, [PMS_ID] ASC) WITH (DROP_EXISTING = ON) ON FG_DATA;
CREATE INDEX [L_PJFPMS_FK] ON [S_CLD].[T_R_PLN_JOUR_FERIE_FIXE_PJF] ([PMS_ID] ASC) WITH (DROP_EXISTING = ON) ON FG_DATA;
CREATE UNIQUE CLUSTERED INDEX [PK_T_R_PLN_JOUR_FERIE_MOBILE_P] ON [S_CLD].[T_R_PLN_JOUR_FERIE_MOBILE_PFM] ([PFM_ID] ASC) WITH (DROP_EXISTING = ON) ON FG_DATA;
CREATE UNIQUE CLUSTERED INDEX [PK_T_R_PLN_JOUR_MOIS_PJM] ON [S_CLD].[T_R_PLN_JOUR_MOIS_PJM] ([PJM_ID] ASC) WITH (DROP_EXISTING = ON) ON FG_DATA;
CREATE UNIQUE CLUSTERED INDEX [PK_T_R_PLN_JOUR_SEMAINE_PJS] ON [S_CLD].[T_R_PLN_JOUR_SEMAINE_PJS] ([PJS_ID] ASC) WITH (DROP_EXISTING = ON) ON FG_DATA;
CREATE UNIQUE CLUSTERED INDEX [PK_T_R_PLN_MOIS_PMS] ON [S_CLD].[T_R_PLN_MOIS_PMS] ([PMS_ID] ASC) WITH (DROP_EXISTING = ON) ON FG_DATA;
CREATE UNIQUE CLUSTERED INDEX [PK_T_R_PLN_SEMAINE_PSM] ON [S_CLD].[T_R_PLN_SEMAINE_PSM] ([PSM_ID] ASC) WITH (DROP_EXISTING = ON) ON FG_DATA;
CREATE UNIQUE CLUSTERED INDEX [PK_T_R_PLN_SEMESTRE_PST] ON [S_CLD].[T_R_PLN_SEMESTRE_PST] ([PST_ID] ASC) WITH (DROP_EXISTING = ON) ON FG_DATA;
CREATE UNIQUE CLUSTERED INDEX [PK_T_R_PLN_TRIMESTRE_PTR] ON [S_CLD].[T_R_PLN_TRIMESTRE_PTR] ([PTR_ID] ASC) WITH (DROP_EXISTING = ON) ON FG_DATA;
CREATE UNIQUE CLUSTERED INDEX [PK_T_R_SERVICE_SCV] ON [S_CEE].[T_R_SERVICE_SCV] ([SVC_ID] ASC) WITH (DROP_EXISTING = ON) ON FG_DATA;
CREATE UNIQUE CLUSTERED INDEX [PK_T_R_SEXE_SEX] ON [S_PRS].[T_R_SEXE_SEX] ([SEX_ID] ASC) WITH (DROP_EXISTING = ON) ON FG_DATA;
CREATE UNIQUE INDEX [UK_SEX_CODE] ON [S_PRS].[T_R_SEXE_SEX] ([SEX_CODE] ASC) WITH (DROP_EXISTING = ON) ON FG_DATA;
CREATE UNIQUE CLUSTERED INDEX [PK_T_R_SITUATION_STT] ON [S_PRS].[T_R_SITUATION_STT] ([STT_ID] ASC) WITH (DROP_EXISTING = ON) ON FG_DATA;
CREATE UNIQUE INDEX [UK_STT_CODE] ON [S_PRS].[T_R_SITUATION_STT] ([STT_CODE] ASC) WITH (DROP_EXISTING = ON) ON FG_DATA;
CREATE UNIQUE CLUSTERED INDEX [PK_T_R_TITRE_TIT] ON [S_PRS].[T_R_TITRE_TIT] ([TIT_ID] ASC) WITH (DROP_EXISTING = ON) ON FG_DATA;
CREATE UNIQUE INDEX [UK_TIT_CODE] ON [S_PRS].[T_R_TITRE_TIT] ([TIT_CODE] ASC) WITH (DROP_EXISTING = ON) ON FG_DATA;
CREATE UNIQUE CLUSTERED INDEX [PK_T_R_TYPE_TELEPHONE_TTL] ON [S_PRS].[T_R_TYPE_TELEPHONE_TTL] ([TTL_ID] ASC) WITH (DROP_EXISTING = ON) ON FG_DATA;
CREATE UNIQUE INDEX [UK_TTL_CODE] ON [S_PRS].[T_R_TYPE_TELEPHONE_TTL] ([TTL_CODE] ASC) WITH (DROP_EXISTING = ON) ON FG_DATA;
CREATE UNIQUE CLUSTERED INDEX [PK_T_TELEPHONE_TEL] ON [S_PRS].[T_TELEPHONE_TEL] ([TEL_ID] ASC) WITH (DROP_EXISTING = ON) ON FG_DATA;
CREATE INDEX [TJ_TELSTT_FK] ON [S_PRS].[T_TELEPHONE_TEL] ([STT_ID] ASC) WITH (DROP_EXISTING = ON) ON FG_DATA;
CREATE INDEX [TJ_TELTTL_FK] ON [S_PRS].[T_TELEPHONE_TEL] ([TTL_ID] ASC) WITH (DROP_EXISTING = ON) ON FG_DATA;
CREATE INDEX [TEL_DEFAUT_FK] ON [S_PRS].[T_TELEPHONE_TEL] ([PRS_ID] ASC) WITH (DROP_EXISTING = ON) ON FG_DATA;
CREATE UNIQUE CLUSTERED INDEX [PK_T_TVA] ON [S_CHB].[T_TVA] ([TVA_ID] ASC) WITH (DROP_EXISTING = ON) ON FG_DATA;
GO
-- move the dbo.DUAL heap to FG_DATA by adding, then dropping, a clustered constraint
ALTER TABLE [dbo].[DUAL] ADD _ INT IDENTITY NOT NULL CONSTRAINT UKsupertoto UNIQUE CLUSTERED ON FG_DATA;
ALTER TABLE [dbo].[DUAL] DROP CONSTRAINT UKsupertoto;
ALTER TABLE [dbo].[DUAL] DROP COLUMN _;
GO
-- resize the log and mdf files
USE [DB_GRAND_HOTEL]
GO
DBCC SHRINKFILE (N'DB_GRAND_HOTEL' , 10)
GO
USE [master]
GO
ALTER DATABASE [DB_GRAND_HOTEL]
MODIFY FILE ( NAME = N'DB_GRAND_HOTEL_log',
SIZE = 102400KB )
GO
-- resize the autogrowth increment of the log and mdf files
USE [master]
GO
ALTER DATABASE [DB_GRAND_HOTEL]
MODIFY FILE ( NAME = N'DB_GRAND_HOTEL',
FILEGROWTH = 10240KB )
GO
ALTER DATABASE [DB_GRAND_HOTEL]
MODIFY FILE ( NAME = N'DB_GRAND_HOTEL_log',
FILEGROWTH = 65536KB )
GO

View File

@ -0,0 +1,7 @@
Mr. Benoît MASSICARD: 60, DBA for 30 years; DB2 on Linux/z/OS, Oracle, Sybase IQ (CORA group)
Mr. Dominique FAUCON: 56, systems engineer; systems/network engineering (Assemblée nationale)
Mr. Anthony LAGRENE: 28, 70% Oracle, SQL Server-based business software (Lidl)
Mr. Vincent DUCAMPS: systems engineer, keen to move into DBA work (Consort NT)
Mr. Jean Marc AUTRAN: (NEONITY) data hosting for accounting firms and SMEs

Binary file not shown.

View File

@ -0,0 +1,10 @@
-- oldest open transaction in the current database
DBCC OPENTRAN WITH TABLERESULTS
-- transactions running longer than n seconds (here 30), per database
SELECT DB_NAME(database_id), DATEDIFF(second, transaction_begin_time, GETDATE()) AS DURATION_SECOND, *
FROM sys.dm_tran_active_transactions AS tat
JOIN sys.dm_tran_database_transactions AS tdb
ON tat.transaction_id = tdb.transaction_id
WHERE database_id > 4
AND DATEDIFF(second, transaction_begin_time, GETDATE()) > 30;
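-- Sketch: tie a long transaction back to its session so it can be investigated
-- (and, as a last resort, killed with KILL <session_id>):
SELECT st.session_id, tat.transaction_id, tat.transaction_begin_time
FROM sys.dm_tran_active_transactions AS tat
JOIN sys.dm_tran_session_transactions AS st
     ON tat.transaction_id = st.transaction_id
WHERE DATEDIFF(second, tat.transaction_begin_time, GETDATE()) > 30;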

View File

@ -0,0 +1,17 @@
-- view the contents of the transaction log of the current database
SELECT *
FROM sys.fn_dblog(NULL, NULL)
-- view the VLFs of the transaction log of the current database
DBCC LOGINFO
SELECT * FROM sys.dm_db_log_info(NULL) --> for another database, pass its database_id
-- resize a transaction log that has too many VLFs:
-- 1) switch to SIMPLE recovery (ALTER DATABASE .... SET RECOVERY SIMPLE)
-- 2) shrink the log file to its minimum size: DBCC SHRINKFILE(log_file, 1);
-- 3) give the log file an adequate size: ALTER DATABASE .... MODIFY FILE (NAME = ...., SIZE = ...);
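-- A concrete sketch of the three steps (hypothetical names DB_TEST / DB_TEST_log;
-- assumes the database started in FULL recovery):
ALTER DATABASE [DB_TEST] SET RECOVERY SIMPLE;
GO
USE [DB_TEST];
GO
DBCC SHRINKFILE (N'DB_TEST_log', 1);
GO
ALTER DATABASE [DB_TEST] MODIFY FILE (NAME = N'DB_TEST_log', SIZE = 1024 MB);
GO
ALTER DATABASE [DB_TEST] SET RECOVERY FULL;
GO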

View File

@ -0,0 +1,11 @@
-- list the databases whose collation differs from the server's
SELECT name, collation_name, SERVERPROPERTY('Collation') AS Server_collation
FROM sys.databases
WHERE collation_name <> SERVERPROPERTY('Collation')
-- databases with AUTO_CLOSE or AUTO_SHRINK enabled
SELECT name, N'ALTER DATABASE [' + name + '] SET AUTO_CLOSE OFF;ALTER DATABASE [' + name + '] SET AUTO_SHRINK OFF;'
FROM sys.databases
WHERE database_id > 4
AND (is_auto_shrink_on = 1
OR is_auto_close_on = 1);

View File

@ -0,0 +1,14 @@
-- switch to the context of the DB_TEST database
USE DB_TEST;
GO
-- become the sole user of the database
ALTER DATABASE DB_TEST SET SINGLE_USER WITH ROLLBACK IMMEDIATE;
GO
-- move to the master database; DB_TEST no longer has any user
USE [master];
GO
ALTER DATABASE [DB_TEST] SET ... -- command to run while no one uses the database
GO
-- switch the database back to multi-user mode
ALTER DATABASE [DB_TEST] SET MULTI_USER
GO

24
IT/SQL/add_partition.sql Normal file
View File

@ -0,0 +1,24 @@
-- add a partition
USE [DB_GRAND_HOTEL]
-- 1) create the new filegroup and its file
ALTER DATABASE DB_GRAND_HOTEL ADD FILEGROUP FG_PART_2009;
ALTER DATABASE DB_GRAND_HOTEL
ADD FILE (NAME = 'F_PART_2009',
FILENAME = 'C:\Program Files\Microsoft SQL Server\MSSQL15.SQL2019FBIN2\MSSQL\DATA\FG_2009.ndf',
SIZE = 25 MB,
FILEGROWTH = 10 MB)
TO FILEGROUP FG_PART_2009;
GO
-- 2) point the partition scheme at the next filegroup to use
ALTER PARTITION SCHEME PS_DATE_FACTURE
NEXT USED FG_PART_2009;
GO
-- 3) add the new partition boundary
ALTER PARTITION FUNCTION PF_DATE_FACTURE()
SPLIT RANGE ('2009-01-01')
GO
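-- Conversely, to retire the oldest boundary later, the sketch would be
-- (after archiving or switching out the old partition):
-- ALTER PARTITION FUNCTION PF_DATE_FACTURE() MERGE RANGE ('2006-01-01');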

195
IT/SQL/cheasheet.md Normal file
View File

@ -0,0 +1,195 @@
# Quick SQL Cheatsheet
A quick reminder of all relevant SQL queries and examples on how to use them.
This repository is constantly being updated and added to by the community.
Pull requests are welcome. Enjoy!
# Table of Contents
1. [ Finding Data Queries. ](#find)
2. [ Data Modification Queries. ](#modify)
3. [ Reporting Queries. ](#report)
4. [ Join Queries. ](#joins)
5. [ View Queries. ](#view)
6. [ Altering Table Queries.](#alter)
7. [ Creating Table Query.](#create)
<a name="find"></a>
# 1. Finding Data Queries
### **SELECT**: used to select data from a database
* `SELECT` * `FROM` table_name;
### **DISTINCT**: filters away duplicate values and returns rows of specified column
* `SELECT DISTINCT` column_name `FROM` table_name;
### **WHERE**: used to filter records/rows
* `SELECT` column1, column2 `FROM` table_name `WHERE` condition;
* `SELECT` * `FROM` table_name `WHERE` condition1 `AND` condition2;
* `SELECT` * `FROM` table_name `WHERE` condition1 `OR` condition2;
* `SELECT` * `FROM` table_name `WHERE NOT` condition;
* `SELECT` * `FROM` table_name `WHERE` condition1 `AND` (condition2 `OR` condition3);
* `SELECT` * `FROM` table_name `WHERE EXISTS` (`SELECT` column_name `FROM` table_name `WHERE` condition);
### **ORDER BY**: used to sort the result-set in ascending or descending order
* `SELECT` * `FROM` table_name `ORDER BY` column;
* `SELECT` * `FROM` table_name `ORDER BY` column `DESC`;
* `SELECT` * `FROM` table_name `ORDER BY` column1 `ASC`, column2 `DESC`;
### **SELECT TOP**: used to specify the number of records to return from top of table
* `SELECT TOP` number columns_names `FROM` table_name `WHERE` condition;
* `SELECT TOP` percent columns_names `FROM` table_name `WHERE` condition;
* Not all database systems support `SELECT TOP`. The MySQL equivalent is the `LIMIT` clause
* `SELECT` column_names `FROM` table_name `LIMIT` offset, count;
### **LIKE**: operator used in a WHERE clause to search for a specific pattern in a column
* % (percent sign) is a wildcard character that represents zero, one, or multiple characters
* _ (underscore) is a wildcard character that represents a single character
* `SELECT` column_names `FROM` table_name `WHERE` column_name `LIKE` pattern;
* `LIKE` a% (find any values that start with “a”)
* `LIKE` %a (find any values that end with “a”)
* `LIKE` %or% (find any values that have “or” in any position)
* `LIKE` _r% (find any values that have “r” in the second position)
* `LIKE` a_%_% (find any values that start with “a” and are at least 3 characters in length)
* `LIKE` [a-c]% (find any values starting with “a”, “b”, or “c”)
### **IN**: operator that allows you to specify multiple values in a WHERE clause
* essentially the IN operator is shorthand for multiple OR conditions
* `SELECT` column_names `FROM` table_name `WHERE` column_name `IN` (value1, value2, …);
* `SELECT` column_names `FROM` table_name `WHERE` column_name `IN` (`SELECT STATEMENT`);
### **BETWEEN**: operator selects values within a given range inclusive
* `SELECT` column_names `FROM` table_name `WHERE` column_name `BETWEEN` value1 `AND` value2;
* `SELECT` * `FROM` Products `WHERE` (column_name `BETWEEN` value1 `AND` value2) `AND NOT` column_name2 `IN` (value3, value4);
* `SELECT` * `FROM` Products `WHERE` column_name `BETWEEN` #01/07/1999# AND #03/12/1999#;
### **NULL**: values in a field with no value
* `SELECT` * `FROM` table_name `WHERE` column_name `IS NULL`;
* `SELECT` * `FROM` table_name `WHERE` column_name `IS NOT NULL`;
### **AS**: aliases are used to assign a temporary name to a table or column
* `SELECT` column_name `AS` alias_name `FROM` table_name;
* `SELECT` column_name `FROM` table_name `AS` alias_name;
* `SELECT` column_name `AS` alias_name1, column_name2 `AS` alias_name2;
* `SELECT` column_name1, column_name2 + ', ' + column_name3 `AS` alias_name;
### **UNION**: set operator used to combine the result-set of two or more SELECT statements
* Each SELECT statement within UNION must have the same number of columns
* The columns must have similar data types
* The columns in each SELECT statement must also be in the same order
* `SELECT` columns_names `FROM` table1 `UNION SELECT` column_name `FROM` table2;
* `UNION` operator only selects distinct values, `UNION ALL` will allow duplicates
### **INTERSECT**: set operator which is used to return the records that two SELECT statements have in common
* Generally used the same way as **UNION** above
* `SELECT` columns_names `FROM` table1 `INTERSECT SELECT` column_name `FROM` table2;
### **EXCEPT**: set operator used to return all the records in the first SELECT statement that are not found in the second SELECT statement
* Generally used the same way as **UNION** above
* `SELECT` columns_names `FROM` table1 `EXCEPT SELECT` column_name `FROM` table2;
### **ANY|ALL**: operator used to check subquery conditions used within a WHERE or HAVING clauses
* The `ANY` operator returns true if any subquery values meet the condition
* The `ALL` operator returns true if all subquery values meet the condition
* `SELECT` columns_names `FROM` table1 `WHERE` column_name operator (`ANY`|`ALL`) (`SELECT` column_name `FROM` table_name `WHERE` condition);
### **GROUP BY**: statement often used with aggregate functions (COUNT, MAX, MIN, SUM, AVG) to group the result-set by one or more columns
* `SELECT` column_name1, COUNT(column_name2) `FROM` table_name `WHERE` condition `GROUP BY` column_name1 `ORDER BY` COUNT(column_name2) DESC;
### **HAVING**: this clause was added to SQL because the WHERE keyword could not be used with aggregate functions
* `SELECT` `COUNT`(column_name1), column_name2 `FROM` table `GROUP BY` column_name2 `HAVING` `COUNT(`column_name1`)` > 5;
### **WITH**: often used for retrieving hierarchical data or re-using temp result set several times in a query. Also referred to as "Common Table Expression"
* `WITH RECURSIVE` cte `AS` (<br/>
&nbsp;&nbsp;`SELECT` c0.* `FROM` categories `AS` c0 `WHERE` id = 1 `# Starting point`<br/>
&nbsp;&nbsp;`UNION ALL`<br/>
&nbsp;&nbsp;`SELECT` c1.* `FROM` categories `AS` c1 `JOIN` cte `ON` c1.parent_category_id = cte.id<br/>
)<br/>
`SELECT` *<br/>
`FROM` cte
<a name="modify"></a>
# 2. Data Modification Queries
### **INSERT INTO**: used to insert new records/rows in a table
* `INSERT INTO` table_name (column1, column2) `VALUES` (value1, value2);
* `INSERT INTO` table_name `VALUES` (value1, value2 …);
### **UPDATE**: used to modify the existing records in a table
* `UPDATE` table_name `SET` column1 = value1, column2 = value2 `WHERE` condition;
* `UPDATE` table_name `SET` column_name = value;
### **DELETE**: used to delete existing records/rows in a table
* `DELETE FROM` table_name `WHERE` condition;
* `DELETE FROM` table_name; (deletes all records)
<a name="report"></a>
# 3. Reporting Queries
### **COUNT**: returns the # of occurrences
* `SELECT COUNT (DISTINCT` column_name`) FROM` table_name;
### **MIN() and MAX()**: returns the smallest/largest value of the selected column
* `SELECT MIN (`column_names`) FROM` table_name `WHERE` condition;
* `SELECT MAX (`column_names`) FROM` table_name `WHERE` condition;
### **AVG()**: returns the average value of a numeric column
* `SELECT AVG (`column_name`) FROM` table_name `WHERE` condition;
### **SUM()**: returns the total sum of a numeric column
* `SELECT SUM (`column_name`) FROM` table_name `WHERE` condition;
<a name="joins"></a>
# 4. Join Queries
### **INNER JOIN**: returns records that have matching value in both tables
* `SELECT` column_names `FROM` table1 `INNER JOIN` table2 `ON` table1.column_name=table2.column_name;
* `SELECT` table1.column_name1, table2.column_name2, table3.column_name3 `FROM` ((table1 `INNER JOIN` table2 `ON` relationship) `INNER JOIN` table3 `ON` relationship);
### **LEFT (OUTER) JOIN**: returns all records from the left table (table1), and the matched records from the right table (table2)
* `SELECT` column_names `FROM` table1 `LEFT JOIN` table2 `ON` table1.column_name=table2.column_name;
### **RIGHT (OUTER) JOIN**: returns all records from the right table (table2), and the matched records from the left table (table1)
* `SELECT` column_names `FROM` table1 `RIGHT JOIN` table2 `ON` table1.column_name=table2.column_name;
### **FULL (OUTER) JOIN**: returns all records when there is a match in either left or right table
* `SELECT` column_names `FROM` table1 ``FULL OUTER JOIN`` table2 `ON` table1.column_name=table2.column_name;
### **Self JOIN**: a regular join, but the table is joined with itself
* `SELECT` column_names `FROM` table1 T1, table1 T2 `WHERE` condition;
<a name="view"></a>
# 5. View Queries
### **CREATE**: create a view
* `CREATE VIEW` view_name `AS SELECT` column1, column2 `FROM` table_name `WHERE` condition;
### **SELECT**: retrieve a view
* `SELECT` * `FROM` view_name;
### **DROP**: drop a view
* `DROP VIEW` view_name;
<a name="alter"></a>
# 6. Altering Table Queries
### **ADD**: add a column
* `ALTER TABLE` table_name `ADD` column_name column_definition;
### **MODIFY**: change data type of column
* `ALTER TABLE` table_name `MODIFY` column_name column_type;
### **DROP**: delete a column
* `ALTER TABLE` table_name `DROP COLUMN` column_name;
<a name="create"></a>
# 7. Creating Table Query
### **CREATE**: create a table
* `CREATE TABLE` table_name `(` <br />
`column1` `datatype`, <br />
`column2` `datatype`, <br />
`column3` `datatype`, <br />
`column4` `datatype`, <br />
`);`

83
IT/SQL/commande.md Normal file
View File

@ -0,0 +1,83 @@
# SQL training commands
Show the version:
`select @@version`
Put a database in single-user mode:
`ALTER DATABASE [toto] SET SINGLE_USER WITH ROLLBACK IMMEDIATE`
Switch back to multi-user mode:
`alter database [toto] set MULTI_USER`
Show the configuration:
`exec sp_configure`
Enable the display of advanced options:
```SQL
exec sp_configure 'show advanced options', 1;
REconfigure;
```
Allow databases to be put in partial containment mode:
```SQL
exec sp_configure 'contained database authentication',1;
reconfigure
```
```SQL
exec sp_configure 'cost threshold for parallelism',12
exec sp_configure 'max degree of parallelism',2
exec sp_configure 'max server memory (MB)', 4096
exec sp_configure'optimize for ad hoc workloads',1
exec sp_configure'backup compression default',1
exec sp_configure'backup checksum default',1
exec sp_configure
reconfigure
```
Backup
```sql
BACKUP DATABASE [toto] TO DISK = 'c:\toto.bak' with Compression
```
Add a filegroup:
```SQL
USE [master]
GO
ALTER DATABASE [DB_GRAND_HOTEL] ADD FILEGROUP [FG_DATA]
GO
ALTER DATABASE [DB_GRAND_HOTEL] ADD FILE ( NAME = N'FG_DATA_1', FILENAME = N'C:\Program Files\Microsoft SQL Server\MSSQL15.SQL2019FBIN2\MSSQL\DATA\FD_DATA_1.mdf' , SIZE = 102400KB , FILEGROWTH = 65536KB ) TO FILEGROUP [FG_DATA]
GO
ALTER DATABASE [DB_GRAND_HOTEL] ADD FILE ( NAME = N'FG_DATA_2', FILENAME = N'C:\Program Files\Microsoft SQL Server\MSSQL15.SQL2019FBIN2\MSSQL\DATA\FD_DATA_2.mdf' , SIZE = 102400KB , FILEGROWTH = 65536KB ) TO FILEGROUP [FG_DATA]
GO
```
Set the default filegroup:
```SQL
USE [DB_GRAND_HOTEL]
GO
IF NOT EXISTS (SELECT name FROM sys.filegroups WHERE is_default=1 AND name = N'FG_DATA') ALTER DATABASE [DB_GRAND_HOTEL] MODIFY FILEGROUP [FG_DATA] DEFAULT
GO
```
Change file sizes:
```SQL
USE [DB_GRAND_HOTEL]
GO
DBCC SHRINKFILE (N'DB_GRAND_HOTEL' , 10)
GO
USE [master]
GO
ALTER DATABASE [DB_GRAND_HOTEL] MODIFY FILE ( NAME = N'DB_GRAND_HOTEL_log', SIZE = 102400KB )
GO
```
Show the number of pages used and the elapsed time:
```SQL
SET STATISTICS IO ON
SET STATISTICS TIME ON
```
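
A quick usage sketch (any query works; the logical reads and CPU/elapsed times then show up in the Messages tab):

```SQL
SET STATISTICS IO ON;
SET STATISTICS TIME ON;
SELECT COUNT(*) FROM sys.objects;
SET STATISTICS IO OFF;
SET STATISTICS TIME OFF;
```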

View File

@ -0,0 +1,79 @@
USE DB_GRAND_HOTEL;
GO
-- create the storage spaces
ALTER DATABASE DB_GRAND_HOTEL ADD FILEGROUP FG_PART_OLD;
ALTER DATABASE DB_GRAND_HOTEL
ADD FILE (NAME = 'F_PART_OLD',
FILENAME = 'C:\Program Files\Microsoft SQL Server\MSSQL15.SQL2019FBIN2\MSSQL\DATA\HOTEL_PART_OLD.ndf',
SIZE = 25 MB,
FILEGROWTH = 10 MB)
TO FILEGROUP FG_PART_OLD;
GO
ALTER DATABASE DB_GRAND_HOTEL ADD FILEGROUP FG_PART_2006;
ALTER DATABASE DB_GRAND_HOTEL
ADD FILE (NAME = 'F_PART_2006',
FILENAME = 'C:\Program Files\Microsoft SQL Server\MSSQL15.SQL2019FBIN2\MSSQL\DATA\HOTEL_PART_2006.ndf',
SIZE = 25 MB,
FILEGROWTH = 10 MB)
TO FILEGROUP FG_PART_2006;
GO
ALTER DATABASE DB_GRAND_HOTEL ADD FILEGROUP FG_PART_2007;
ALTER DATABASE DB_GRAND_HOTEL
ADD FILE (NAME = 'F_PART_2007',
FILENAME = 'C:\Program Files\Microsoft SQL Server\MSSQL15.SQL2019FBIN2\MSSQL\DATA\HOTEL_PART_2007.ndf',
SIZE = 25 MB,
FILEGROWTH = 10 MB)
TO FILEGROUP FG_PART_2007;
GO
ALTER DATABASE DB_GRAND_HOTEL ADD FILEGROUP FG_PART_2008;
ALTER DATABASE DB_GRAND_HOTEL
ADD FILE (NAME = 'F_PART_2008',
FILENAME = 'C:\Program Files\Microsoft SQL Server\MSSQL15.SQL2019FBIN2\MSSQL\DATA\HOTEL_PART_2008.ndf',
SIZE = 25 MB,
FILEGROWTH = 10 MB)
TO FILEGROUP FG_PART_2008;
GO
-- 1) create the partitioning function
CREATE PARTITION FUNCTION PF_DATE_FACTURE (DATETIME)
AS RANGE RIGHT
FOR VALUES ('2006-01-01', '2007-01-01', '2008-01-01');
-- 2) create the partition scheme
CREATE PARTITION SCHEME PS_DATE_FACTURE
AS PARTITION PF_DATE_FACTURE
TO (FG_PART_OLD, FG_PART_2006, FG_PART_2007, FG_PART_2008);
-- 3) create the object on the partition scheme
BEGIN TRANSACTION;
BEGIN TRY
--> start by removing the FK constraint from table T_FACTURE_ITEM_ITM
ALTER TABLE [S_CHB].[T_FACTURE_ITEM_ITM] DROP CONSTRAINT [FK_T_FACTUR_CONTIENT_T_FACTUR];
--> the PK must be removed as well!!!
ALTER TABLE [S_CHB].[T_FACTURE_FAC] DROP CONSTRAINT [PK_T_FACTURE_FAC];
--> impossible while the clustered PK exists!
CREATE UNIQUE CLUSTERED INDEX X ON [S_CHB].[T_FACTURE_FAC] ([FAC_DATE], [FAC_ID])
ON PS_DATE_FACTURE(FAC_DATE);
--> put the PK back (WARNING: by default a PK is created as a clustered index)
ALTER TABLE [S_CHB].[T_FACTURE_FAC]
ADD CONSTRAINT [PK_T_FACTURE_FAC] PRIMARY KEY NONCLUSTERED ([FAC_ID])
ON FG_DATA;
--> put the FK back
ALTER TABLE [S_CHB].[T_FACTURE_ITEM_ITM]
ADD CONSTRAINT [FK_T_FACTUR_CONTIENT_T_FACTUR]
FOREIGN KEY ([FAC_ID])
REFERENCES [S_CHB].[T_FACTURE_FAC] (FAC_ID);
COMMIT;
-- everything went well => COMMIT
END TRY
BEGIN CATCH
-- something went wrong => ROLLBACK
IF XACT_STATE() <> 0
ROLLBACK;
THROW;
END CATCH

48
IT/SQL/show_partition.SQL Normal file
View File

@ -0,0 +1,48 @@
SELECT s.name AS TABLE_SCHEMA,
o.name AS TABLE_NAME,
i.name AS INDEX_NAME,
f.name AS PARTITION_FUNCTION,
ps.name AS PARTITION_SCHEMA,
p.partition_number AS PART_NUM,
fg.name AS FILE_GROUP,
rows AS ROW_COUNT,
SUM(dbf.size) OVER(PARTITION BY fg.name) AS PAGE_COUNT,
au.total_pages AS USED_PAGES,
CASE boundary_value_on_right
WHEN 1
THEN 'RIGHT'
ELSE 'LEFT'
END AS RANGE,
rv1.value AS LOW_VALUE,
rv2.value AS HIGH_VALUE
FROM sys.partitions p
JOIN sys.indexes i
ON p.object_id = i.object_id
AND p.index_id = i.index_id
JOIN sys.objects AS o
ON i.object_id = o.object_id
JOIN sys.schemas AS s
ON o.schema_id = s.schema_id
JOIN sys.partition_schemes ps
ON ps.data_space_id = i.data_space_id
JOIN sys.partition_functions f
ON f.function_id = ps.function_id
JOIN sys.destination_data_spaces dds
ON dds.partition_scheme_id = ps.data_space_id
AND dds.destination_id = p.partition_number
JOIN sys.filegroups fg
ON dds.data_space_id = fg.data_space_id
JOIN sys.database_files AS dbf
ON dbf.data_space_id = fg.data_space_id
JOIN sys.allocation_units au
ON au.container_id = p.partition_id
LEFT OUTER JOIN sys.partition_range_values rv2
ON f.function_id = rv2.function_id
AND p.partition_number = rv2.boundary_id
LEFT OUTER JOIN sys.partition_range_values rv1
ON f.function_id = rv1.function_id
AND p.partition_number - 1 = rv1.boundary_id
ORDER BY TABLE_SCHEMA,
TABLE_NAME,
INDEX_NAME,
LOW_VALUE;

5
IT/Soft_util.md Normal file
View File

@ -0,0 +1,5 @@
# Useful software
## Tunnelling
[chisel](https://github.com/jpillora/chisel) lets you set up a TCP tunnel over HTTP

5
IT/WebDesign/CSS.md Normal file
View File

@ -0,0 +1,5 @@
# CSS
## link
[Bulma style sheet](https://devhints.io/bulma)
[Remove.bg](https://www.remove.bg/)

View File

@ -0,0 +1,151 @@
# Refactoring UI: The Book
> Learn how to design awesome UIs by yourself using specific tactics explained from a developer's point-of-view.
Adapted from our book and video series, Refactoring UI.
Ever used one of those fancy color palette generators? You know, the ones where you pick a starting color, tweak some options that probably include some musical jargon like "triad" or "major fourth", and are then bestowed the five perfect color swatches you should use to build your website?
![](https://refactoring-ui.nyc3.cdn.digitaloceanspaces.com/previews/whats-in-a-color-palette-01.png)
This calculated and scientific approach to picking the perfect color scheme is extremely seductive, but not very useful.
Well, unless you want your site to look like this:
![](https://refactoring-ui.nyc3.cdn.digitaloceanspaces.com/previews/whats-in-a-color-palette-02.png)
What you actually need
----------------------
You can't build anything with five hex codes. To build something real, you need a much more comprehensive set of colors to choose from.
![](https://refactoring-ui.nyc3.cdn.digitaloceanspaces.com/previews/whats-in-a-color-palette-03.png)
You can break a good color palette down into three categories.
### Greys
Text, backgrounds, panels, form controls — almost everything in an interface is grey.
![](https://refactoring-ui.nyc3.cdn.digitaloceanspaces.com/previews/whats-in-a-color-palette-04.png)
You'll need more greys than you think, too — three or four shades might sound like plenty but it won't be long before you wish you had something a little darker than shade #2 but a little lighter than shade #3.
In practice, you want 8-10 shades to choose from (more on this later). Not so many that you waste time deciding between shade #77 and shade #78, but enough to make sure you don't have to compromise too much.
![](https://refactoring-ui.nyc3.cdn.digitaloceanspaces.com/previews/whats-in-a-color-palette-05.png)
True black tends to look pretty unnatural, so start with a really dark grey and work your way up to white in steady increments.
### Primary color(s)
Most sites need one, _maybe_ two colors that are used for primary actions, emphasizing navigation elements, etc. These are the colors that determine the overall look of a site — the ones that make you think of Facebook as "blue", even though it's really mostly grey.
Just like with greys, you need a variety _(5-10)_ of lighter and darker shades to choose from.
![](https://refactoring-ui.nyc3.cdn.digitaloceanspaces.com/previews/whats-in-a-color-palette-06.png)
Ultra-light shades can be useful as a tinted background for things like alerts, while darker shades work great for text.
### Accent colors
On top of primary colors, every site needs a few _accent_ colors for communicating different things to the user.
For example, you might want to use an eye-grabbing color like yellow, pink, or teal to highlight a new feature:
![](https://refactoring-ui.nyc3.cdn.digitaloceanspaces.com/previews/whats-in-a-color-palette-07.png)
You might also need colors to emphasize different semantic _states_, like red for confirming a destructive action:
![](https://refactoring-ui.nyc3.cdn.digitaloceanspaces.com/previews/whats-in-a-color-palette-08.png)
...yellow for a warning message:
![](https://refactoring-ui.nyc3.cdn.digitaloceanspaces.com/previews/whats-in-a-color-palette-09.png)
...or green to highlight a positive trend:
![](https://refactoring-ui.nyc3.cdn.digitaloceanspaces.com/previews/whats-in-a-color-palette-10.png)
You'll want multiple shades for these colors too, even though they should be used pretty sparingly throughout the UI.
If you're building something where you need to use color to distinguish or categorize similar elements (like lines on graphs, events in a calendar, or tags on a project), you might need even more accent colors.
All in, it's not uncommon to need as many as _ten_ different colors with _5-10 shades each_ for a complex UI.
Define your shades up front
---------------------------
When you need to create a lighter or darker variation of a color in your palette, don't get clever using CSS preprocessor functions like "lighten" or "darken" to create shades on the fly. That's how you end up with 35 _slightly_ different blues that all look the same.
Instead, define a fixed set of shades up front that you can choose from as you work.
![](https://refactoring-ui.nyc3.cdn.digitaloceanspaces.com/previews/define-your-shades-up-front-01.png)
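In CSS, a fixed scale can live in one place as custom properties — a minimal sketch, where the names and hex values are placeholders rather than a recommended palette:
```css
/* One fixed, hand-picked scale — no lighten()/darken() at runtime. */
:root {
  --blue-100: #ebf8ff; /* lightest — tinted backgrounds */
  --blue-300: #90cdf4;
  --blue-500: #4299e1; /* base — e.g. button backgrounds */
  --blue-700: #2b6cb0;
  --blue-900: #2a4365; /* darkest — text */
}

.btn-primary {
  background-color: var(--blue-500);
  color: white;
}
```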
So how do you put together a palette like this anyways?
Choose the base color first
---------------------------
Start by picking a _base_ color for the scale you want to create — the color in the middle that your lighter and darker shades are based on.
There's no real scientific way to do this, but for primary and accent colors, a good rule of thumb is to pick a shade that would work well as a button background.
![](https://refactoring-ui.nyc3.cdn.digitaloceanspaces.com/previews/define-your-shades-up-front-02.png)
It's important to note that there are no real rules here like "start at 50% lightness" or anything — every color behaves a bit differently, so you'll have to rely on your eyes for this one.
Finding the edges
-----------------
Next, pick your darkest shade and your lightest shade. There's no real science to this either, but it helps to think about where they will be used and choose them using that context.
The darkest shade of a color is usually reserved for text, while the lightest shade might be used to tint the background of an element.
A simple alert component is a good example that combines both of these use cases, so it can be a great place to pick these colors.
![](https://refactoring-ui.nyc3.cdn.digitaloceanspaces.com/previews/define-your-shades-up-front-03.png)
Start with a color that matches the hue of your base color, and adjust the saturation and lightness until you're satisfied.
Filling in the gaps
-------------------
Once you've got your base, darkest, and lightest shades, you just need to fill in the gaps in between them.
For most projects, you'll need at least 5 shades per color, and probably closer to 10 if you don't want to feel too constrained.
Nine is a great number because it's easy to divide and makes filling in the gaps a little more straightforward. Let's call our darkest shade _900_, our base shade _500_, and our lightest shade _100_.
Start by picking shades _700_ and _300_, the ones right in the middle of the gaps. You want these shades to feel like the perfect compromise between the shades on either side.
![](https://refactoring-ui.nyc3.cdn.digitaloceanspaces.com/previews/define-your-shades-up-front-04.png)
This creates four more holes in the scale (_800_, _600_, _400_, and _200_), which you can fill using the same approach.
![](https://refactoring-ui.nyc3.cdn.digitaloceanspaces.com/previews/define-your-shades-up-front-05.png)
You should end up with a pretty balanced set of colors that provide just enough options to accommodate your design ideas without feeling limiting.
What about greys?
-----------------
With greys the base color isn't as important, but otherwise the process is the same. Start at the edges and fill in the gaps until you have what you need.
![](https://refactoring-ui.nyc3.cdn.digitaloceanspaces.com/previews/define-your-shades-up-front-06.png)
Pick your darkest grey by choosing a color for the darkest text in your project, and your lightest grey by choosing something that works well for a subtle off-white background.
It's not a science
------------------
As tempting as it is, you can't rely purely on math to craft the perfect color palette.
A systematic approach like the one described above is great to get you started, but don't be afraid to make little tweaks if you need to.
Once you actually start using your colors in your designs, it's almost inevitable that you'll want to tweak the saturation on a shade, or make a couple of shades lighter or darker. Trust your eyes, not the numbers.
Just try to avoid adding _new_ shades too often if you can avoid it. If you're not diligent about limiting your palette, you might as well have no color system at all.
[Source](https://refactoringui.com/previews/building-your-color-palette/)


@ -0,0 +1 @@
[Rock Solid HTML Emails ◆ 24 ways](https://24ways.org/2009/rock-solid-html-emails)

2
IT/WebDesign/html.md Normal file

@ -0,0 +1,2 @@
[HTMl Reference](https://htmlreference.io/)
[CSS References](https://cssreference.io/)

21
IT/ansible.md Normal file

@ -0,0 +1,21 @@
# ansible
## ansible commands
***launch playbook on staging***
`ansible-playbook -i staging site.yml --vault-password-file=.vaultpassword`
***init a new role skeleton***
`ansible-galaxy init user_config`
***launch ansible bootstrap***
` ansible-playbook -i nas, bootstrap.yml -u root --ask-pass`
***encrypt string***
`ansible-vault encrypt_string`
## ignore known hosts file in ansible
```
export ANSIBLE_HOST_KEY_CHECKING=False
--ssh-extra-args='-o GlobalKnownHostsFile=/dev/null -o UserKnownHostsFile=/dev/null'
```
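For reference, a minimal sketch of a playbook that the `ansible-playbook` commands above could run (file, host and role names are assumptions):
```yaml
# site.yml — apply the user_config role to every host in the inventory
- hosts: all
  become: true
  roles:
    - user_config
```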

130
IT/docker/index.md Normal file

@ -0,0 +1,130 @@
# docker
## concepts
- image: a read-only template from which containers are created
- container: a running instance of an image
## commands
- download an image: `docker pull debian:$tag`
- create and start a container with a tty: `docker run -it debian bash`
- stop a container: `docker stop`
- start a container: `docker start`
- list all containers: `docker ps -a`
- delete a container: `docker rm 2cdc`
- run a command in a container: `docker exec`
- see a container's stdout: `docker logs`
- create and start a container: `docker run -d --name node-app -p 3000:3000 -v $(pwd):/app node:0.12.4 node /app/server.js`
    - -d runs it as a daemon
    - --name names the container
    - -p binds the port to a host port (first port for the host, second for the container)
    - -v $(pwd):/app: this option shares a folder with your container; here we share the current directory (where our server.js file sits) with the /app directory in the container (beware: on Mac or Windows only your 'home' is shared)
    - node:0.12.4: the Docker image you want to use
    - node /app/server.js: the command to run in the container
## dockerfile
```conf
#FROM sets the base image; it can only appear once per build stage in a Dockerfile.
FROM debian
#RUN executes a command inside your image, as if you were in front of a unix shell.
RUN apt-get update \
    && apt-get install -y curl xz-utils \
    && rm -rf /var/lib/apt/lists/*
RUN curl -LO "https://nodejs.org/dist/v12.2.0/node-v12.2.0-linux-x64.tar.xz" \
    && tar -xJf node-v12.2.0-linux-x64.tar.xz -C /usr/local --strip-components=1 \
    && rm node-v12.2.0-linux-x64.tar.xz
#COPY adds local or remote files into your image; it is most often used to import your project sources or configuration files.
COPY package.json /app/
#WORKDIR changes the current directory of your image; every command that follows runs from this directory.
WORKDIR /app
RUN npm install
#adds the project directory to the image
COPY . /app/
#ADD can fetch sources from a url or extract an archive
#EXPOSE and VOLUME respectively declare which port and which folder we want to share.
EXPOSE 3000
VOLUME /app/log
#USER selects the user that will run the service
#ENTRYPOINT ["executable", "param1", "param2"] sets the command and parameters that will be executed first when a container is run.
#CMD is the instruction to execute when the container starts
CMD node server.js
```
Like a .gitignore, the .dockerignore file keeps certain files out of the Docker image.
## build an image
- run the build of the Dockerfile: `docker build -t containername:tag .`
    - -t names the docker image
- to add another tag, use `docker tag`, as shown below
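For example (the image name and tags are placeholders):
```
docker build -t myapp:1.0 .
docker tag myapp:1.0 myapp:latest
```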
## docker compose
manages several containers together (infrastructure as code)
example yaml config file:
```yaml
version: "3"
services:
#1 postgres container with its image and its environment variables
postgres:
image: postgres:10
environment:
POSTGRES_USER: rails_user
POSTGRES_PASSWORD: rails_password
POSTGRES_DB: rails_db
redis:
image: redis:3.2-alpine
rails:
#runs a build based on the local dockerfile
build: .
#only started once its dependencies are up
#depends_on creates a link between containers inside the network Docker creates
depends_on:
- postgres
- redis
environment:
DATABASE_URL: 'postgres://rails_user:rails_password@postgres:5432/rails_db'
REDIS_HOST: 'redis:6379'
#mounts the local directory on /app
volumes:
- .:/app
nginx:
image: nginx:latest
links:
- rails
ports:
- 3000:80
#here we only mount a single configuration file
volumes:
- ./nginx.conf:/etc/nginx/conf.d/default.conf:ro
```
- `docker-compose up` starts the services described in docker-compose.yml and keeps hold of the terminal.
- `docker-compose up -d` does the same but gives the terminal back once the services are started.
- `docker-compose up --build` rebuilds the services before starting them.
- `docker-compose down` stops and removes the services.
- `docker-compose restart` restarts all the services.
- `docker-compose restart nginx` restarts one of the services (here nginx).
- `docker-compose exec rails bash` opens a bash console inside the rails container.
- `docker-compose logs` returns all the service logs since the last start and gives the terminal back.
- `docker-compose logs -f` shows the service logs and keeps "listening" to them without giving the terminal back.
- `docker-compose logs -f rails` does the same for the rails container only.


@ -0,0 +1,16 @@
# Home Assistant
## installation
``` yay home-assistant ```
- needs the mosquitto broker for MQTT
## Cheatsheet
- [material icons](https://cdn.materialdesignicons.com/4.5.95/)
## events
- xiaomi cube: `xiaomi_aqara.cube_action`

8
IT/domotique/index.md Normal file

@ -0,0 +1,8 @@
# home automation
[Hack Xiaomi gateway V2](https://easydomoticz.com/forum/viewtopic.php?t=8397)
## jeedom
- docker install with the database on the host
``` docker run --net host --name jeedom-server --privileged -v /opt/jeedom:/var/www/html -e ROOT_PASSWORD=toto -e APACHE_PORT=9080 jeedom/jeedom:alpha ```
- for the SQL host, use 127.0.0.1

332
IT/fail2ban.md Normal file

@ -0,0 +1,332 @@
1. [Settings](#Les-reglages)
2. [Fail2Ban cli](#Fail2Ban-cli)
## Settings
The global configuration file `/etc/fail2ban/fail2ban.conf` doesn't contain much worth changing. You can set where Fail2Ban writes its logs, how verbose they are, and tweak the unix socket settings. Personally, I touch nothing.
`/etc/fail2ban/jail.conf` contains the default _jails_. As stated at the top of the file, it must not be edited directly. Jails are enabled in `/etc/fail2ban/jail.d/defaults-debian.conf` (Debian & Ubuntu); the file name is probably different on other distros, but you have the directory path, so you should find your way around without trouble.
In its simplest configuration, you just enable the _jails_ offered by default. Here is, for example, a minimalist but functional configuration.
```
[DEFAULT]
destemail = mon-email@mail.fr
sender = root@domaine.fr
[sshd]
enabled = true
[sshd-ddos]
enabled = true
[recidive]
enabled = true
```
You can nevertheless add a few more details. Also note that for each _jail_, as well as for `[DEFAULT]`, you can set `ignoreip`, which, as its name suggests, skips certain ips or ip blocks. Handy to avoid finding yourself locked out of your own server.
```
[DEFAULT]
destemail = mon-email@mail.fr
sender = root@domaine.fr
ignoreip = 127.0.0.1/8
[sshd]
enabled = true
[apache]
enabled = true
```
You can see that I also added `apache` here. If you run an Apache server, it can be useful. Be sure to browse `jail.conf` as well as the predefined filters in `filter.d` to see what exists by default, and enable jails according to your needs.
That's not all; other common options are worth knowing:
* `port` specifies the ports to block,
* `logpath` indicates the log file to analyse,
* `maxretry` the number of occurrences in the log file before the action is triggered,
* `findtime` the time window during which occurrences are counted (beyond findtime, the counter starts over),
* `bantime` how long the ip stays blocked by Fail2Ban.
```
[…]
[sshd]
enabled = true
maxretry = 10
findtime = 120
bantime = 1200
```
If you browse `/etc/fail2ban/jail.conf`, you can see the other options that apply by default when you don't redefine them.
Finally, you can also set `backend`, the method used to _watch_ the logs. Four methods are offered, plus an automatic mode:
* `pyinotify`: a Python module that monitors modifications to a file.
* `gamin`: same purpose as the previous one, but it is a module from the Gnome project.
* `polling`: the file is simply checked at regular intervals for new writes.
* `systemd`: here, Fail2Ban hooks into SystemD to be notified of new logs.
* `auto`: automatic mode, which tries all the above solutions, in that order.
In most cases you can therefore leave `auto`. Note that the `backend` can nonetheless be set per jail.
### Adding new filters
The great thing about Fail2Ban is that you can add as many _jails_ as you want. If you know how to use [REGEX](https://buzut.net/la-puissance-des-regex/), you know how to write a _jail_!
We start by creating a filter to detect failed login attempts. In such a case, our application should return a 401 error. Here is an example of a log line we want to match:
```
80.214.431.42 - - [14/Oct/2018:21:27:32 +0200] "POST /users/login HTTP/2.0" 401 30 "https://app.buzeo.me/" "Mozilla/5.0 (Macintosh; Intel Mac OS X 10.14; rv:63.0) Gecko/20100101 Firefox/63.0" "-"
```
Here is what our filter looks like (`/etc/fail2ban/filter.d/nginx-unauthorized.conf`).
```
[Definition]
failregex = <HOST> - - \[.*\] ".*" 401
ignoreregex =
```
It's as simple as that! `<HOST>`, as you'll have guessed, lets Fail2Ban match an ip or a hostname and capture that address in order to block it in `iptables`. The rest is fairly standard. The `failregex` can contain several lines; in that case, each is matched independently. As for the `ignoreregex`, its name is explicit enough: it makes Fail2Ban disregard a given pattern.
Now let's edit our configuration in `/etc/fail2ban/jail.d/defaults-debian.conf` to enable our new filter.
```
[…]
[nginx-unauthorized]
enabled = true
filter = nginx-unauthorized
port = 80,443
logpath = /var/log/nginx/access.log
maxretry = 5
findtime = 120
bantime = 300
```
## Fail2Ban cli
Fail2Ban ships with several very handy command-line tools. `fail2ban-regex` validates your filters, and `fail2ban-client` manages every other aspect of the software, from inspecting the contents of a jail to restarting the tool. Let's start with a quick overview of the regex tool.
```
fail2ban-regex <fichier-de-log | string-représentant-une-ligne-de-log> <chemin-du-filtre | string-regex> [<chemin-du-filtre | string-ignoregex]
fail2ban-regex /var/log/nginx/access.log /etc/fail2ban/filter.d/nginx-unauthorized.conf
```
It's not a bug: if you want to test an `ignoreregex` from a filter file, you have to give the path twice. Very useful, `fail2ban-regex` is nonetheless very simple to use.
Now on to Fail2ban's "command center": `fail2ban-client`. This tool lets you check the state of Fail2Ban, such as the enabled _jails_, the number of blocked ips, etc. Let's go over the most useful functions.
First of all, note that this tool can conveniently be used in interactive mode; to launch it, invoke the command with the `-i` option.
### Basic commands
The basic commands are `start`, `reload`, `stop`, `status`, `ping`, `help`. Most of them need no explanation, but here is a quick rundown:
`start`
Starts the server and the jails.
`reload`
Reloads the configuration.
`reload <JAIL>`
Reloads the configuration of a single jail.
`stop`
Stops the server.
`status`
Returns the server status: number of jails, filters, number of fails…
`ping`
Simply checks that the server responds.
`help`
Returns the full set of available commands.
Here, for example, is the output of the `status` command; we are not using interactive mode here.
```
fail2ban-client status
Status
|- Number of jail: 4
`- Jail list: ssh-ddos, nginx-errors, recidive, ssh
```
### Logging
Logging is the very heart of Fail2Ban: without logs, the tool could not work. The tool also generates its own logs, explorable via the cli. Let's see what options the latter offers.
`set loglevel`
Sets the logging level.
`get loglevel`
Returns the logging level.
`set logtarget`
Sets the log target (`STDOUT`, `STDERR`, `SYSLOG` or a path to a file).
`get logtarget`
Returns the path of the log file (or something else if it is not a file).
`flushlogs`
Empties the log file (if available). This function is meant for rotation.
### Database
Fail2Ban has an internal SQLite database. It persists information between restarts, notably the ips to block, and recreates the iptables rules at startup.
`set dbfile`
Sets the location of the database.
`get dbfile`
Returns the path of the database.
`set dbpurgeage`
Sets how long information is retained in the database.
`get dbpurgeage`
Returns, from the current configuration, the retention time in seconds of information in the database.
### Jail control and information
Controlling the jails is the crux of the matter in Fail2Ban. Let's discover the actions at our disposal. Below are the main control actions.
`add <JAIL> <BACKEND>`
Enables a jail and sets its backend (you'll put `auto` most of the time).
`start <JAIL>`
Starts a stopped jail.
`stop <JAIL>`
Stops and disables a jail.
`status <JAIL>`
Returns the details of one jail in particular.
Here, for example, is the status of a jail.
```
fail2ban-client status ssh
Status for the jail: ssh
|- filter
| |- File list: /var/log/auth.log
| |- Currently failed: 0
| `- Total failed: 4499
`- Actions
|- Currently banned: 1
|- Total banned: 274
`- IP list: 176.140.156.45
```
Now let's see in more detail how to obtain detailed information about specific jails.
`get <JAIL> logpath`
Returns the path of the log file analysed by this jail.
`get <JAIL> logencoding`
Returns the encoding of the log file used by this jail.
`get <JAIL> journalmatch`
Returns the log entries matched by this jail.
`get <JAIL> ignoreip`
Shows the ignored ips.
`get <JAIL> ignorecommand`
Shows the `ignorecommand` entries of this jail.
`get <JAIL> failregex`
Shows the `failregex` of this jail.
`get <JAIL> ignoreregex`
Shows the `ignoreregex` of this jail.
`get <JAIL> findtime`
Returns the time window during which attempts are counted for this jail.
`get <JAIL> bantime`
Returns the ban duration for this jail.
`get <JAIL> maxretry`
Returns the number of errors tolerated before banning.
`get <JAIL> maxlines`
Returns the maximum number of lines analysed.
`get <JAIL> actions`
Shows all the actions tied to this jail.
Finally, it is also possible to alter jail parameters directly from the command line. Although the configuration files are generally used for this, these commands can prove particularly useful for manually banning or unbanning addresses.
```
fail2ban-client set <JAIL> banip <IP>
fail2ban-client set <JAIL> unbanip <IP>
```
This is not an exhaustive list of what the Fail2Ban command line offers. Many parameters usually configured via the configuration files can also be set via the CLI. To discover all the possibilities, just use the `help` command.
Know, however, that Fail2Ban still has other tricks up its sleeve. On a match, it can carry out any behaviour (send an email, redirect to another ip…). To learn more, I invite you to read the part about [actions in this article](https://www.it-connect.fr/filtres-et-actions-personnalises-dans-fail2ban/#IV_Creation_dune_action_personnalisee) and [this one in English](https://www.digitalocean.com/community/tutorials/how-fail2ban-works-to-protect-services-on-a-linux-server#examining-the-action-file).
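For a taste, a minimal custom action file could look like this (the file name and mail command are assumptions, modelled on the stock mail action; `<ip>` and `<name>` are tags Fail2Ban substitutes):
```
# /etc/fail2ban/action.d/mail-notify.conf — hypothetical minimal action
[Definition]
actionstart =
actionstop =
actioncheck =
actionban = printf %%b "Subject: [Fail2Ban] <name>: banned <ip>\n\n<ip> was banned by jail <name>." | /usr/sbin/sendmail root
actionunban =
```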

43
IT/git.md Normal file

@ -0,0 +1,43 @@
# git
## delete a file only from the git repository (keep it on disk)
`git rm -r --cached <file>`
## delete a file from the complete history
Be careful: this rewrites history, so you will have to force-push afterwards, and anyone who has already cloned will need to re-fetch.
`git filter-branch --index-filter "git rm -rf --cached --ignore-unmatch logbook.md" HEAD `
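For example, to publish the rewritten history (remote and branch names assumed):
`git push --force origin master`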
## example git deploy hook (e.g. a server-side post-receive hook)
```
#!/bin/bash
GIT_REPO=`pwd`
SITE_NAME=notebook
TMP_GIT_CLONE="/tmp/$SITE_NAME"
PUBLIC_WWW="/usr/share/nginx/html/$SITE_NAME"
# get branch name (pre/post-receive stdin lines look like: "<old-sha> <new-sha> refs/heads/<branch>")
if ! [ -t 0 ]; then
read -a ref
fi
IFS='/' read -ra REF <<< "${ref[2]}"
branch="${REF[2]}"
if [ "master" == "$branch" ]; then
mkdir -p $TMP_GIT_CLONE
echo "download repo"
git clone --recursive $GIT_REPO $TMP_GIT_CLONE
cd $TMP_GIT_CLONE
export PATH="$HOME/.local/bin:$PATH"
make install -e BUILDDIR=$PUBLIC_WWW
echo "Cleaning up"
rm -Rf $TMP_GIT_CLONE
fi
exit
```


@ -0,0 +1,34 @@
```
if ( !-d $request_filename ) {
rewrite ^/ampache/rest/(.*)\.view$ /ampache/rest/index.php?action=$1 last;
rewrite ^/ampache/rest/play/(.+)$ /ampache/play/$1 last;
}
rewrite ^/ampache/play/ssid/(\w+)/type/(\w+)/oid/([0-9]+)/uid/([0-9]+)/name/(.*)$ /ampache/play/index.php?ssid=$1&type=$2&oid=$3&uid=$4&name=$5 last;
rewrite ^/ampache/play/ssid/(\w+)/type/(\w+)/oid/([0-9]+)/uid/([0-9]+)/client/(.*)/noscrobble/([0-1])/name/(.*)$ /ampache/play/index.php?ssid=$1&type=$2&oid=$3&uid=$4&client=$5&noscrobble=$6&name=$7 last;
rewrite ^/ampache/play/ssid/(.*)/type/(.*)/oid/([0-9]+)/uid/([0-9]+)/client/(.*)/noscrobble/([0-1])/player/(.*)/name/(.*)$ /ampache/play/index.php?ssid=$1&type=$2&oid=$3&uid=$4&client=$5&noscrobble=$6&player=$7&name=$8 last;
rewrite ^/ampache/play/ssid/(.*)/type/(.*)/oid/([0-9]+)/uid/([0-9]+)/client/(.*)/noscrobble/([0-1])/bitrate/([0-9]+)/player/(.*)/name/(.*)$ /ampache/play/index.php?ssid=$1&type=$2&oid=$3&uid=$4&client=$5&noscrobble=$6&bitrate=$7&player=$8&name=$9 last;
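# NB: nginx numbered captures only go up to $9, so the $10 in the next rewrite is read as $1 followed by a literal "0"; a named capture would be needed there.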
rewrite ^/ampache/play/ssid/(.*)/type/(.*)/oid/([0-9]+)/uid/([0-9]+)/client/(.*)/noscrobble/([0-1])/transcode_to/(w+)/bitrate/([0-9]+)/player/(.*)/name/(.*)$ /ampache/play/index.php?ssid=$1&type=$2&oid=$3&uid=$4&client=$5&noscrobble=$6&transcode_to=$7&bitrate=$8&player=$9&name=$10 last;
# The following line necessary for me to be able to download single songs
rewrite ^/ampache/play/ssid/(.*)/type/(.*)/oid/([0-9]+)/uid/([0-9]+)/action/(.*)/name/(.*)$ /ampache/play/index.php?ssid=$1&type=$2&oid=$3&uid=$4&action=$5&name=$6 last;
# used for transfering art work to some clients, seems not to work for clementine because of an clementine-internal issue
location /ampache/play {
if (!-e $request_filename) {
rewrite ^/ampache/play/art/([^/]+)/([^/]+)/([0-9]+)/thumb([0-9]*)\.([a-z]+)$ /ampache/image.php?object_type=$2&object_id=$3&auth=$1;
break;
}
rewrite ^/([^/]+)/([^/]+)(/.*)?$ /ampache/play/$3?$1=$2;
rewrite ^/(/[^/]+|[^/]+/|/?)$ /ampache/play/index.php last;
break;
}
location /ampache/rest {
limit_except GET POST {
deny all;
}
}
```

57
IT/http/index.md Normal file

@ -0,0 +1,57 @@
## 1×× Informational
- 100 Continue
This interim response indicates that everything so far is OK and that the client should continue with the request or ignore it if it is already finished.
## 2×× Success
- 200 OK
The request has succeeded. The meaning of a success varies depending on the HTTP method:
GET: The resource has been fetched and is transmitted in the message body.
HEAD: The entity headers are in the message body.
POST: The resource describing the result of the action is transmitted in the message body.
TRACE: The message body contains the request message as received by the server
- 201 Created
The request has succeeded and a new resource has been created as a result. This is typically the response sent after a POST request, or after some PUT requests.
- 202 Accepted
The request has been received but not yet acted upon. It is non-committal, meaning that there is no way in HTTP to later send an asynchronous response indicating the outcome of processing the request. It is intended for cases where another process or server handles the request, or for batch processing.
- 204 No Content
There is no response body. Typically used for successful PUT or DELETE requests.
- 207 Multi-Status
The response body contains multiple status informations for different parts of a batch/bulk request
## 3×× Redirection
- 300 Multiple Choices
The request has more than one possible response, and the user-agent or user should choose one of them. There is no standardized way to pick one of the responses.
- 301 Moved Permanently
This response code means that the URI of the requested resource has been changed. The new URI will probably be given in the response.
- 302 Found
This response code means that the URI of the requested resource has been changed temporarily. Further changes to the URI might be made in the future, so the client should keep using this same URI in future requests.
- 303 See Other
The server sends this response to direct the client to fetch the requested resource at another URI with a GET request.
- 304 Not Modified
This is used for caching purposes. It tells the client that the response has not been modified, so the client can continue to use the same cached version.
## 4×× Client Error
- 400 Bad Request
This response means that the server could not understand the request due to invalid syntax.
- 401 Unauthorized
Authentication is needed to get the requested response. This is similar to 403, but in this case, authentication is possible.
- 403 Forbidden
The client does not have access rights to the content, so the server refuses to give a proper response.
- 404 Not Found
The server cannot find the requested resource. This response code is probably the most famous one, given how frequently it occurs on the web.
- 405 Method Not Allowed
The request method is known by the server but has been disabled and cannot be used. The two mandatory methods, GET and HEAD, must never be disabled and should not return this error code.
- 429 Too Many Requests
The client ignored rate limiting and sent too many requests.
## 5×× Server Error
- 500 Internal Server Error
The server has encountered a situation it doesn't know how to handle.
- 501 Not Implemented
The request method is not supported by the server and cannot be handled. The only methods that servers are required to support (and therefore that must not return this code) are GET and HEAD.
- 502 Bad Gateway
This error response means that the server, while working as a gateway to get a response needed to handle the request, got an invalid response.
- 503 Service Unavailable
The server is not ready to handle the request. Common causes are a server that is down for maintenance or that is overloaded. Note that a user-friendly page explaining the problem should be sent together with this response. This response should be used for temporary conditions, and the Retry-After HTTP header should, if possible, contain the estimated time before recovery of the service. The webmaster must also take care of the caching-related headers sent along with this response, as these temporary-condition responses should usually not be cached.
For more info: https://httpstatuses.com/
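A quick way to see which status code a server actually returns (standard curl options; the URL is a placeholder):
```
curl -s -o /dev/null -w "%{http_code}\n" https://example.com
```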

19
IT/http/nginx.md Normal file

@ -0,0 +1,19 @@
# nginx
## reverse proxy
``` location /chainetv {
proxy_pass http://unix:/run/gunicorn/socket:/;
#proxy_set_header Host $host;
#proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
#proxy_set_header X-Scheme $scheme;
#proxy_set_header X-Script-Name /chainetv;
}
```
- the `:/` after the socket forwards the sub-path appended to the requested address, so the app listens on the /api route
- the commented-out part behaves the same way but allows finer tuning, combined with flask and the ReverseProxied class for example:
the location can redirect /chainetv/api while setting X-Script-Name to /chainetv, so the app can still listen on the /api route
## Links
- [Nginx configuration generator](https://github.com/valentinxxx/nginxconfig.io)
- [Nginx Quick Reference](https://github.com/trimstray/nginx-quick-reference) ([HN](https://news.ycombinator.com/item?id=19112090)) - Notes describes how to improve Nginx performance, security and other important things.

12
IT/hugo.md Normal file

@ -0,0 +1,12 @@
## hugo
hugo is a static website generator
## Useful commands
- `hugo server -b localhost/starter` launches a test server on your local machine
    - -b sets the baseURL option, which depends on your configuration
    - you can add -t with a theme name if you have several themes installed
- `hugo new site .` creates a new site skeleton
- `hugo new test.md` creates a new file in the content folder
- `hugo` compiles the site

189
IT/install_nextcloud.md Normal file

@ -0,0 +1,189 @@
# Nextcloud installation
## hook pacman
- /etc/pacman.d/hooks/nextcloud.hook
```
[Trigger]
Operation = Install
Operation = Upgrade
Type = Package
Target = nextcloud
Target = nextcloud-app-*
[Action]
Description = Update Nextcloud installation
When = PostTransaction
Exec = /usr/bin/runuser -u http -- /usr/bin/php /usr/share/webapps/nextcloud/occ upgrade
```
## php
- make http the owner of /usr/share/webapps/nextcloud
- Install php-gd and php-intl as additional modules. Configure OPcache as recommended by the documentation. Some apps (News for example) require the iconv extension; if you wish to use these apps, uncomment the extension in /etc/php/php.ini.
in php.ini
```
change memory_limit = 256M
```
- create the data directory
```
mkdir /var/nextcloud
chown http:http /var/nextcloud
chmod 750 /var/nextcloud
```
- override php-fpm (systemd drop-in)
```
[Service]
ReadWritePaths = /usr/share/webapps/nextcloud/apps
ReadWritePaths = /etc/webapps/nextcloud/config
ReadWritePaths = /var/nextcloud
ReadWritePaths = /mnt/diskstation
```
## dedicated nginx config:
```
upstream php-handler {
# server 127.0.0.1:9000;
server unix:/var/run/php-fpm/php-fpm.sock;
}
server {
listen 80;
listen [::]:80;
server_name _;
# Add headers to serve security related headers
# Before enabling Strict-Transport-Security headers please read into this
# topic first.
# add_header Strict-Transport-Security "max-age=15768000;
# includeSubDomains; preload;";
#
# WARNING: Only add the preload option once you read about
# the consequences in https://hstspreload.org/. This option
# will add the domain to a hardcoded list that is shipped
# in all major browsers and getting removed from this list
# could take several months.
add_header X-Content-Type-Options nosniff;
add_header X-XSS-Protection "1; mode=block";
add_header X-Robots-Tag none;
add_header X-Download-Options noopen;
add_header X-Permitted-Cross-Domain-Policies none;
# Path to the root of your installation
root /usr/share/webapps/nextcloud/;
location = /robots.txt {
allow all;
log_not_found off;
access_log off;
}
# The following 2 rules are only needed for the user_webfinger app.
# Uncomment it if you're planning to use this app.
#rewrite ^/.well-known/host-meta /public.php?service=host-meta last;
#rewrite ^/.well-known/host-meta.json /public.php?service=host-meta-json
# last;
location = /.well-known/carddav {
return 301 $scheme://$host/remote.php/dav;
}
location = /.well-known/caldav {
return 301 $scheme://$host/remote.php/dav;
}
# set max upload size
client_max_body_size 512M;
fastcgi_buffers 64 4K;
# Enable gzip but do not remove ETag headers
gzip on;
gzip_vary on;
gzip_comp_level 4;
gzip_min_length 256;
gzip_proxied expired no-cache no-store private no_last_modified no_etag auth;
gzip_types application/atom+xml application/javascript application/json application/ld+json application/manifest+json application/rss+xml application/vnd.geo+json application/vnd.ms-fontobject application/x-font-ttf application/x-web-app-manifest+json application/xhtml+xml application/xml font/opentype image/bmp image/svg+xml image/x-icon text/cache-manifest text/css text/plain text/vcard text/vnd.rim.location.xloc text/vtt text/x-component text/x-cross-domain-policy;
# Uncomment if your server is build with the ngx_pagespeed module
# This module is currently not supported.
#pagespeed off;
location / {
rewrite ^ /index.php$request_uri;
}
location ~ ^/(?:build|tests|config|lib|3rdparty|templates|data)/ {
deny all;
}
location ~ ^/(?:\.|autotest|occ|issue|indie|db_|console) {
deny all;
}
location ~ ^/(?:index|remote|public|cron|core/ajax/update|status|ocs/v[12]|updater/.+|ocs-provider/.+)\.php(?:$|/) {
fastcgi_split_path_info ^(.+?\.php)(/.*)$;
include fastcgi_params;
fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
fastcgi_param PATH_INFO $fastcgi_path_info;
#fastcgi_param HTTPS on;
#Avoid sending the security headers twice
fastcgi_param modHeadersAvailable true;
fastcgi_param front_controller_active true;
fastcgi_pass php-handler;
fastcgi_intercept_errors on;
fastcgi_request_buffering off;
}
location ~ ^/(?:updater|ocs-provider)(?:$|/) {
try_files $uri/ =404;
index index.php;
}
# Adding the cache control header for js and css files
# Make sure it is BELOW the PHP block
location ~ \.(?:css|js|woff|svg|gif)$ {
try_files $uri /index.php$request_uri;
add_header Cache-Control "public, max-age=15778463";
# Add headers to serve security related headers (It is intended to
# have those duplicated to the ones above)
# Before enabling Strict-Transport-Security headers please read into
# this topic first.
# add_header Strict-Transport-Security "max-age=15768000; includeSubDomains; preload;";
#
# WARNING: Only add the preload option once you read about
# the consequences in https://hstspreload.org/. This option
# will add the domain to a hardcoded list that is shipped
# in all major browsers and getting removed from this list
# could take several months.
add_header X-Content-Type-Options nosniff;
add_header X-XSS-Protection "1; mode=block";
add_header X-Robots-Tag none;
add_header X-Download-Options noopen;
add_header X-Permitted-Cross-Domain-Policies none;
# Optional: Don't log access to assets
access_log off;
}
location ~ \.(?:png|html|ttf|ico|jpg|jpeg)$ {
try_files $uri /index.php$request_uri;
# Optional: Don't log access to other assets
access_log off;
}
}
```
- for ssl [see](https://docs.nextcloud.com/server/12/admin_manual/installation/nginx.html)
## database management
## shares
- reconfigure the NFS server shares so that http has read access (impact on the other services still to be analysed)
## config file
## other
- set up a file-scan cron:
```
php /usr/share/webapps/nextcloud/occ files:scan --all
```
- Configure fail2ban

5
IT/linux/AUR.md Normal file

@ -0,0 +1,5 @@
# AUR
## update .SRCINFO
`makepkg --printsrcinfo > .SRCINFO`

26
IT/linux/LVM.md Normal file

@ -0,0 +1,26 @@
# LVM
## reduce logical volume
The following command must return no output:
`lsof /home`
Next un-mount the partition
`umount /home`
check filesystem
`e2fsck -f /dev/mapper/arch-home`
resize file system
`resize2fs /dev/mapper/arch-home 20G`
reduce logical volume
`lvreduce /dev/mapper/arch-home -L 20G`
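Alternatively, the `-r` flag resizes the filesystem and the logical volume in one step:
`lvreduce -r -L 20G /dev/mapper/arch-home`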
## increase logical volume
`lvextend -L +44G /dev/mapper/arch-root`
`resize2fs /dev/mapper/arch-root`

9
IT/linux/arch.md Normal file

@ -0,0 +1,9 @@
## arch tips
#### corrupted or outdated GPG keys
`sudo pacman -S archlinux-keyring`
#### customize an arch iso
[archiso](https://wiki.archlinux.org/index.php/Archiso)

133
IT/linux/arch_install.md Normal file

@ -0,0 +1,133 @@
---
title: arch_install
---
The following is a brief installation tutorial for [Arch Linux][1]. It assumes
familiarity with the Arch [Beginner's Guide][2] and [Installation Guide][3].
It will provide a system using [LVM on LUKS][4].
Note that this guide assumes you are performing the install to `/dev/sda`. In
some cases, you may find that your USB install disk claimed `/dev/sda` and you
want to install to `/dev/sdb`. Confirm which disk is which before proceeding.
Boot into the Arch installer.
If your console font is tiny ([HiDPI][7] systems), set a new font.
$ setfont sun12x22
Connect to the Internet.
Verify that the [system clock is up to date][8].
$ timedatectl set-ntp true
(bios mode)
$ parted -s /dev/sda mklabel msdos
$ parted -s /dev/sda mkpart primary 1MiB 513MiB
$ parted -s /dev/sda mkpart primary 1024MiB 100%
$ mkfs.ext2 /dev/sda1
$ mkfs.ext4 /dev/sda2
(UEFI mode) Create partitions for EFI, boot, and root.
$ parted -s /dev/sda mklabel gpt
$ parted -s /dev/sda mkpart primary fat32 1MiB 513MiB
$ parted -s /dev/sda set 1 boot on
$ parted -s /dev/sda set 1 esp on
$ parted -s /dev/sda mkpart primary 513MiB 1024MiB
$ parted -s /dev/sda mkpart primary 1024MiB 100%
$ mkfs.ext4 /dev/sda2
$ mkfs.fat -F32 /dev/sda1
Create and mount the encrypted root filesystem. Note that for UEFI systems
this will be partition 3.
$ cryptsetup luksFormat /dev/sda3
$ cryptsetup open /dev/sda3 lvm
$ pvcreate /dev/mapper/lvm
$ vgcreate arch /dev/mapper/lvm
$ lvcreate -L 4G arch -n swap
$ lvcreate -L 30G arch -n root
$ lvcreate -l +100%FREE arch -n home
$ lvdisplay
$ mkswap -L swap /dev/mapper/arch-swap
$ mkfs.ext4 /dev/mapper/arch-root
$ mkfs.ext4 /dev/mapper/arch-home
$ mount /dev/mapper/arch-root /mnt
$ mkdir /mnt/home
$ mount /dev/mapper/arch-home /mnt/home
$ swapon /dev/mapper/arch-swap
(UEFI mode) Mount the boot and EFI partitions.
$ mkdir /mnt/boot
$ mount /dev/sda2 /mnt/boot
$ mkdir /mnt/boot/efi
$ mount /dev/sda1 /mnt/boot/efi
Optionally [edit the mirror list][9].
$ vi /etc/pacman.d/mirrorlist
Install the [base system][10].
$ pacstrap -i /mnt base base-devel net-tools wireless_tools dialog wpa_supplicant openssh git grub ansible
(UEFI mode) $ pacstrap /mnt efibootmgr
Generate and verify [fstab][11].
$ genfstab -U -p /mnt >> /mnt/etc/fstab
$ less /mnt/etc/fstab
Change root into the base install and perform [base configuration tasks][12].
$ arch-chroot /mnt /bin/bash
$ systemctl enable dhcpcd.service
$ systemctl enable sshd.service
$ passwd
edit /etc/ssh/sshd_config and set PermitRootLogin yes
Set your mkinitcpio.
# only for UEFI
$ sed -i 's/^HOOKS=.*/HOOKS="base udev autodetect modconf block keyboard encrypt lvm2 resume filesystems fsck"/' /etc/mkinitcpio.conf
# for both
$ mkinitcpio -p linux
Configure GRUB.
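(LUKS) Point the kernel at the encrypted device before generating the config; the device and mapper name below are assumed to match the partitioning above.
$ sed -i 's|^GRUB_CMDLINE_LINUX=.*|GRUB_CMDLINE_LINUX="cryptdevice=/dev/sda3:lvm"|' /etc/default/grub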
# BIOS mode
$ grub-install /dev/sda
$ grub-mkconfig -o /boot/grub/grub.cfg
# UEFI mode
$ grub-install --target=x86_64-efi --efi-directory=/boot/efi --bootloader-id=grub --recheck
$ grub-mkconfig -o /boot/grub/grub.cfg
$ chmod -R g-rwx,o-rwx /boot
Cleanup and reboot!
$ exit
$ umount -R /mnt
$ reboot
Run ansible!
[1]: https://www.archlinux.org/
[2]: https://wiki.archlinux.org/index.php/Beginners'_guide
[3]: https://wiki.archlinux.org/index.php/Installation_guide
[4]: https://wiki.archlinux.org/index.php/Encrypted_LVM#LVM_on_LUKS
[5]: http://www.pavelkogan.com/2014/05/23/luks-full-disk-encryption/
[6]: https://wiki.archlinux.org/index.php/Dm-crypt/Encrypting_an_entire_system#Encrypted_boot_partition_.28GRUB.29
[7]: https://wiki.archlinux.org/index.php/HiDPI
[8]: https://wiki.archlinux.org/index.php/Beginners'_guide#Update_the_system_clock
[9]: https://wiki.archlinux.org/index.php/Beginners'_Guide#Select_a_mirror
[10]: https://wiki.archlinux.org/index.php/Beginners'_Guide#Install_the_base_system
[11]: https://wiki.archlinux.org/index.php/Beginners'_guide#Generate_an_fstab
[12]: https://wiki.archlinux.org/index.php/Beginners'_guide#Configure_the_base_system

45
IT/linux/cron.md Normal file

@ -0,0 +1,45 @@
# cron
```
# * * * * * command to execute
# │ │ │ │ │
# │ │ │ │ │
# │ │ │ │ └───── day of week (0 - 6) (0 to 6 are Sunday to Saturday, or use names; 7 is Sunday, the same as 0)
# │ │ │ └────────── month (1 - 12)
# │ │ └─────────────── day of month (1 - 31)
# │ └──────────────────── hour (0 - 23)
# └───────────────────────── min (0 - 59)
```
See also
- [wikipedia/cron](http://en.wikipedia.org/wiki/Cron)
## cron directories
You can create directories that run cron commands. Taken from the `Ubuntu`
distro:
```txt
# /etc/crontab: system-wide crontab
# Unlike any other crontab you don't have to run the `crontab'
# command to install the new version when you edit this file
# and files in /etc/cron.d. These files also have username fields,
# that none of the other crontabs do.
SHELL=/bin/sh
PATH=/usr/local/sbin:/usr/local/bin:/sbin:/bin:/usr/sbin:/usr/bin
# m h dom mon dow user command
17 * * * * root cd / && run-parts --report /etc/cron.hourly
25 6 * * * root test -x /usr/sbin/anacron || ( cd / && run-parts --report /etc/cron.daily )
47 6 * * 7 root test -x /usr/sbin/anacron || ( cd / && run-parts --report /etc/cron.weekly )
52 6 1 * * root test -x /usr/sbin/anacron || ( cd / && run-parts --report /etc/cron.monthly )
```
## run every weekday at midnight
```sh
0 0 * * 1-5
```
## edit cronjobs
```sh
$ sudo crontab -e
```
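For instance, a crontab line like this one (the script path is a placeholder) runs a backup every weekday at midnight:
```sh
0 0 * * 1-5 /usr/local/bin/backup.sh
```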

28
IT/linux/grubshell.md Normal file

@ -0,0 +1,28 @@
# grub shell
## boot from the grub shell
- the **cat** and **ls** commands work
- find the boot partition
- set the partition as root: `set root=(hd0,1)`
- load the kernel found on this partition and set the root partition: `linux /vmlinuz root=/dev/sda3`
- load the initrd image matching the kernel: `initrd /boot/initrd.img`
- enter `boot` to boot
## regenerate the config
`grub-mkconfig -o /boot/grub/grub.cfg`
## install to the MBR
`grub-install --target=i386-pc /dev/sdX`
## the same thing from grub rescue
`grub rescue> set prefix=(hd0,1)/boot/grub`
`grub rescue> set root=(hd0,1)`
`grub rescue> insmod normal`
`grub rescue> normal`
`grub rescue> insmod linux`
`grub rescue> linux /boot/vmlinuz-3.13.0-29-generic root=/dev/sda1`
`grub rescue> initrd /boot/initrd.img-3.13.0-29-generic`
`grub rescue> boot`
[archwiki](https://wiki.archlinux.org/index.php/GRUB)

48
IT/linux/linux.md Normal file

@ -0,0 +1,48 @@
## find the installation date
`ls -lct /etc | tail -1 | awk '{print $6, $7, $8}'`
## enable Wayland support:
- execute `ln -s /dev/null /etc/udev/rules.d/61-gdm.rules` to stop gnome from probing your nvidia driver
- add `nvidia-drm.modeset=1` to GRUB_CMDLINE_LINUX_DEFAULT in */etc/default/grub*
- update the grub config as root: `grub-mkconfig -o /boot/grub/grub.cfg`
- add the nvidia modules to mkinitcpio.conf:
`MODULES=(nvidia nvidia_modeset nvidia_uvm nvidia_drm)`
- create a pacman hook:
```
[Trigger]
Operation=Install
Operation=Upgrade
Operation=Remove
Type=Package
Target=nvidia
Target=linux
# Change the linux part above and in the Exec line if a different kernel is used
[Action]
Description=Update Nvidia module in initcpio
Depends=mkinitcpio
When=PostTransaction
NeedsTargets
Exec=/bin/sh -c 'while read -r trg; do case $trg in linux) exit 0; esac; done; /usr/bin/mkinitcpio -P'
```
## check whether you are running on wayland:
`loginctl show-session $(loginctl G $USERNAME|awk '{print $1}') -p Type`
## boot in emergency target
1. (Re)boot the machine
2. Wait for the grub menu to appear, then hit “e”,
3. scroll down to the “linux16” line then press the end key
4. type the following:
systemd.unit=emergency.target
5. press ctrl+x
6. make root writable with `mount -o rw,remount /`


@ -0,0 +1,8 @@
- update the NAT rules on the ISP box
- update the www CNAME on the DNS
- add the new server to the DNS replication authorizations if it is a slave DNS server
- add it to the NFS authorizations of the NFS shares (git, Carddav)
- recreate the syncthing share
- change the secondary dns
- redirect the backups
- re-enable the server-check cron && the SQL dump

3
IT/motorola.md Normal file

@ -0,0 +1,3 @@
# Motorola
[tuto root](https://forum.xda-developers.com/motorola-one-pro/development/root-twrp-3-4-0-android-10-t4156037)


@ -0,0 +1 @@
[design patterns (FR wikibook)](https://fr.m.wikibooks.org/wiki/Patrons_de_conception/)

0
IT/python/flask.md Normal file

14
IT/python/index.md Normal file

@ -0,0 +1,14 @@
# python
## reference
https://github.com/gto76/python-cheatsheet/blob/master/README.md
## Virtual env
- create virtual env: `python3 -m venv /path`
- enter in virtual env: `source env/bin/activate`
- leave venv: `deactivate`
- list all installed package in file: `pip freeze > requirement.txt`
- install package list in file: `pip install -r requirement.txt`
## tips
- pprint pretty-prints your data structures
- the dir() function lists the attributes of an object
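A quick demo of both helpers (any object works; a dict here):
```python
from pprint import pprint

data = {"name": "arch", "pkgs": ["nginx", "php"], "uefi": True}
pprint(data)      # pretty-prints nested data structures
print(dir(data))  # lists the attributes and methods of the object
```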

9
IT/raspberry.md Normal file

@ -0,0 +1,9 @@
# raspberry
## enable the jack audio output
/boot/config.txt
```
device_tree_param=audio=on
audio_pwm_mode=2
```

2
IT/regex.md Normal file

@ -0,0 +1,2 @@
[ready-made regexes](https://ihateregex.io/?q=)
[regex101](https://regex101.com/)

17
IT/ssh.md Normal file

@ -0,0 +1,17 @@
# ssh
## port forwarding
```
ssh -L 2000:toto:80 Vincent@tata
```
Forwards local port 2000 to port 80 on toto, tunnelling through tata
-f leaves the command running in the background
-R forwards a remote port: with -R, port 2000 on tata is forwarded to port 80 on toto, going through the client, as sketched below
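A sketch combining these flags (-N keeps the tunnel open without running a remote command):
```
ssh -f -N -R 2000:toto:80 Vincent@tata
```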
## ignore known hosts file
```
ssh -o GlobalKnownHostsFile=/dev/null -o UserKnownHostsFile=/dev/null
```

15
IT/syncthing.MD Normal file

@ -0,0 +1,15 @@
# list of syncthing shares on the main server
- Pi2:
    - name: beet_db
      path: /mnt/diskstation/home/beet_db
      type: sendreceive
    - name: notes
      path: /mnt/diskstation/home/notes2
      type: sendreceive
    - name: Caméra
      path: /mnt/diskstation/photo/backup
      type: receiveonly
    - name: keepass
      path: /mnt/diskstation/home/keepass
      type: sendreceive

8
IT/synologyTips.md Normal file

@ -0,0 +1,8 @@
## Tips on synology NAS
### debug the ssl certificate renewal on synology
If the renewal of the synology certificate with Let's Encrypt fails, proceed like this:
- connect to synology with SSH
- enter command ```sudo /usr/syno/sbin/syno-letsencrypt renew-all -v```

Some files were not shown because too many files have changed in this diff